EPBCS/PBCS Applications: What is the Maximum Number of Dimensions that can be Added to an ASO Plan Type?


What is the maximum number of dimensions allowed in an ASO plan type in EPBCS and PBCS applications?

The maximum number of dimensions in an EPBCS or PBCS application is 32, and an application can have up to 4 ASO cubes.

The combined limit is 32.

Thanks,

~KKT~


SmartView Essbase Client Support with SSL


Does the SmartView Essbase client support SSL?

Yes, it does.

If you need an Essbase client that works over SSL, you can use the SmartView client, which supports it.

You need to configure the SmartView client to connect via HTTPS to Analytic Provider Services (APS).

APS will then access Essbase on its secure port.
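For example (a hedged illustration only; the server name and SSL port are placeholders for whatever your web tier actually exposes), a SmartView connection URL pointing at the APS provider over HTTPS looks like:

        https://aps.example.com:4443/aps/SmartView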

Thanks,

~KKT~

Troubleshoot SmartView Performance Issues


Symptoms

Typically, the problem arises only with large spreadsheets containing large numbers of SmartView cells. It may only occur when Refresh All is executed but not when worksheets are refreshed individually, or it may only occur at times of high load such as month end. It is more likely to arise with SmartView functions than with ad hoc retrievals, because ad hoc retrievals are more efficient than functions. Symptoms such as these point to a performance issue rather than a bug. However, if the IIS logs show "Deadlock detected", this is probably a sign of a bug, where SmartView crashes IIS on the web server. The symptoms there would be SmartView becoming unresponsive and new users being unable to log on for a while; restarting the web server resolves it. That situation is outside the scope of this document.

Support

Once an actual bug has been eliminated, the issue must be treated as a performance problem. This lies outside the remit of your contract with Oracle Product Support. You may decide to engage a consultant or to try to tune the system yourself. Please understand that Support can only give you general and generic guidance here; we will not read logs or make recommendations.

Background

SmartView communicates over HTTP using .NET. This just means that requests and responses are sent in a standard XML format carried inside the HTTP messages. The mechanism is the same as when your internet browser requests a file or submits a form (a simplification). A standard Microsoft component that is part of Internet Explorer is used for this.

There are three components (four if you include the network) in a SmartView refresh:

  • the client
  • the web server
  • the HFM application server

It is easiest to troubleshoot when they are on separate machines. Running SmartView on the server to see whether it works better is a useful test of what effect the network is having, but it does not expose any other bottlenecks. If you run Task Manager on these three machines showing the Performance tab, you will see initial activity on the client as Excel clears the cells and the request is converted into XML. Then the web server gets busy as it receives and parses the request. Then the application server is busy as it sends the data, then the web server is busy as it compiles the results into XML again, and finally the client parses the XML, pushes the data into cells and runs an Excel calculation. This last stage is the longest of all. Overall, it is typical to see the client doing 90-95% of the work, so client performance is a very important factor. Look out for Excel's memory usage increasing and RAM running short. Often, user dissatisfaction with refresh times can be alleviated by installing more RAM or providing a faster workstation.

The situation we are dealing with here, however, is when the chain of events described above is interrupted. In this case, you will see the client or the web server wait in vain for a response from the next machine in the chain. The case of invalid XML is similar, but the response is truncated so that the XML is not well-formed when it arrives. XML consists of matching pairs of tags and is not well-formed when a closing tag is missing. This can be caused by certain network settings such as packet size (see later). The end result in either case is the same: because Excel cleared the cells before sending the request, the cells are left blank or filled with zeroes.
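As a purely generic illustration (not the actual SmartView payload), a well-formed fragment has every opening tag matched by a closing tag:

        <response><cell>100</cell></response>

whereas a response truncated to <response><cell>100</cell> is missing its closing </response> tag and cannot be parsed.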

The task now is to determine which particular component is failing.

Client

Check the HTTP timeout in the Windows registry. Example:

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
"ReceiveTimeout"=dword:00dbba00
"KeepAliveTimeout"=dword:0002bf20
"ServerInfoTimeout"=dword:0002bf20

The default time-out limit is five minutes for versions 4.0 and 4.01 and 60 minutes for versions 5.x and 6. The default time-out limit is 30 seconds for Windows Internet Explorer 7 and 8. To change the settings, add the keys above. The values are shown in hexadecimal, not decimal; they correspond to 180,000 milliseconds (3 minutes) for the keep-alive and server-info timeouts and 240 minutes (14,400,000 milliseconds) for the receive timeout.
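If you prefer to script the change, here is a minimal sketch using reg add (the values are simply the decimal equivalents of the example above; pick timeouts appropriate to your own environment):

        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ReceiveTimeout /t REG_DWORD /d 14400000 /f
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v KeepAliveTimeout /t REG_DWORD /d 180000 /f
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ServerInfoTimeout /t REG_DWORD /d 180000 /f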

For more information about client timeout settings, see the following articles:
How to change the default keep-alive time-out value in Internet Explorer – http://support.microsoft.com/kb/813827
Internet Explorer error “connection timed out” when server does not respond – http://support.microsoft.com/kb/181050

You must determine an adequate timeout limit for your environment and largest workbook by trial and error or experience. Once these changes are applied, my recommendation is to reboot the client machine.

Web server

Look for errors in the logs showing that the web application did not respond, disconnected, or timed out waiting for another application (i.e. the HFM application server). Eliminate load balancers etc. during testing in order to isolate the problem and to ensure that the user always goes to the same web server; otherwise, logs can be hard to interpret. See also the IIS Tuning Guide for IIS settings. We cannot recommend particular settings, as each customer must establish the optimal settings for their own environment after performance testing. If necessary, engage the services of a suitably qualified consultant.

Application Server

HFM application server performance is very dependent on whether the data is already in RAM or not. If the first refresh fails but the second one succeeds, this is probably a sign that the memory cache needed to be populated, which was done during the first refresh. HFM loads whole subcubes (Scenario, Year, Value and Entity) into cache, so requesting a large number of accounts within the same subcube returns results quickly, but if a large number of entities is requested, results come back slowly as the server runs out of available RAM and starts unloading subcubes to make room for more. (For example, a sheet that pulls 500 accounts for a single entity touches one subcube, whereas a sheet that pulls one account for 500 entities forces 500 subcubes through the cache.) The HFM event log (HsvEventLog.log) shows this happening, but don't forget that the messages you see are a normal consequence of the way HFM works and are not a problem unless you see a hugely elevated number of them during the period the refresh occurs. Another sign would be page-file thrashing. If there are several servers in a cluster, use sticky sessions in HFM to make sure tests always go to the same server, or hit one server directly rather than using the cluster.

Solutions here involve installing more RAM on the server and changing the worksheet design to restrict the number of subcubes accessed. A separate consolidation server, so that reads are not competing with intensive consolidations and calculations, would also help.

Network

To find out whether packets are getting fragmented, use

        ping -l size -f host

which sends a packet of the given size with the 'don't fragment' flag set. For example:

        ping -l 1024 -f oracle.com

If you see a reply like "Reply from 141.146.8.66: bytes=1024 time=200ms TTL=245", then the size you specified is below the packet limit of the network. But if you use, for example, 2000 in place of 1024 and you get "Packet needs to be fragmented but DF set", then you know you have exceeded the limit. Combine this investigation with a Fiddler trace (https://www.fiddler2.com/fiddler2/) to see what size data messages are being sent and received.
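As a worked example on a standard Ethernet network, the IP and ICMP headers add 28 bytes to the ping payload, so the largest payload that fits in a 1500-byte MTU is 1472 bytes:

        ping -l 1472 -f oracle.com

If 1472 succeeds but 1473 comes back with "Packet needs to be fragmented but DF set", the path MTU is 1500.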

Troubleshooting Guidance

  1. During the troubleshooting process, fix your environment so that it is predictable. Register a single HFM application server and connect to that. Use a single web server and don't go through Apache or any other redirector. Make sure we know which machine to go to for logs. This is a temporary measure, and it doesn't stop the other users from using the full, load-balanced system so long as it doesn't involve your test server.
  2. As far as possible, keep other users off the system during testing, unless you are specifically testing performance under load. We want to see which activity is due to SmartView and not to other random jobs.
  3. Use PerfMon on all three machines: client, web server and application server.
  4. Clear Logs before testing and note the times of each test so the logs can be correlated.
  5. Log CPU activity, memory usage, network throughput, page faults, thread count etc. I'm no expert on this, so get advice from someone who is (a sample Performance Monitor collector is sketched after this list).
  6. In addition to the performance logs, from the IIS Server get:
    a. HTTP.SYS Error Log – %windir%\System32\LogFiles\HTTPERR (Default location; configurable)
    b. IIS Website Log – %windir%\System32\LogFiles\W3SVC# (Default location; configurable)
    c. Event Log (both System and Application) in EVT or EVTX and TXT (tab separated) formats
  7. From the HFM application server get:
    a. HsvEventLog.log (in <HFM install>\Server working Folder)
    b. Event Log (both System and Application) in EVT and TXT (tab separated) formats
  8. Be aware of all the timeout settings in your environment (e.g. IIS and the network) and time your tests. This can show whether it is an ASP timeout, a script timeout, a client-side timeout or something else. Note that the Web Session Timeout setting for SmartView in HFM's Server and Web Configuration is a separate issue from IIS timeouts: Web Session Timeout controls the length of idle time before SmartView makes the user log in again, and defaults to 20 minutes.
  9. Run Fiddler traces to directly observe the XML and check for HTTP failures.
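As a sketch of a Performance Monitor collector created from the command line (the counter names and the 15-second interval are only illustrative; choose counters with your Windows administrator):

        logman create counter SVRefreshTest -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" "\Network Interface(*)\Bytes Total/sec" -si 00:00:15 -o C:\PerfLogs\SVRefreshTest
        logman start SVRefreshTest

Run this on the client, the web server and the application server for the duration of each test, then stop it with logman stop SVRefreshTest.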

Essbase Studio vs Drillbridge


I get asked a lot about implementing drill-through with Drillbridge versus implementing with Essbase Studio. First of all, this question itself is kind of interesting to me – because many people look at Studio as a means to an end simply for achieving drill-through. Drill-through is one of the aspects of Essbase Studio, but certainly not the whole story.

Essbase Studio is a good tool. It's the successor to Essbase Integration Services, which I was a huge fan of, and my blog articles from yesteryear continue to get a lot of web traffic, owing at least partly to there being very few EIS articles on the web.

In terms of implementing solutions with Essbase Studio, the most common scenario I see at organizations is that they want drill-through, and this alone drives the decision to use Essbase Studio. So the developer starts going through development: creating fact tables, dimension tables, foundational Essbase Studio elements, cube models, and deploying a cube.

Let’s take things a step further and think about the pieces involved with automating and managing the cube: some MaxL to deploy it via automation, ETL processes to load and update the dimension/fact tables, creating the drill-through reports with custom SQL, writing calc scripts, and more.

The point of all this isn’t to make Studio seem like a lot of work. It’s a reasonable amount of work if Essbase Studio solutions fit into your organization (more on that in a moment). I think it’s an unreasonable amount of work if you only want drill-through.

Implementing a solution with Studio just to get drill-through is what I sometimes call “backdoor drill-through” – basically overlaying an existing solution with Studio drill-through just so you can get drill-through, but not otherwise leveraging much of anything that Essbase Studio brings to the table.

That said, if your organization has a "strong" relational database model (e.g., a well-designed database built for some system, with proper primary/foreign keys, constraints, and so on), and some other unit of the business takes care of updating it on a regular basis, I think that's a really good use case for building out a cube with Essbase Studio. But why?

Again, it seems like most solutions built with Essbase Studio are built from tables and data specifically created for Studio; in other words, someone has to design ETL processes to load tables, from wherever the data lives, that are specifically formatted for Studio and the cube to be built from them. One of the problems with this approach, however, is that many of us are cube experts and not necessarily relational database experts. You don't have to be a relational database expert to make use of Studio, but it won't hurt; in fact, it can help quite a bit.

So again, I think Essbase Studio can be a great choice for spinning up multiple cubes when you are tapping into an existing relational model, and not as much if you have to create that model yourself. Essbase Studio solutions should feel "natural": you should be able to get away with the default data load code and stock drill-through code if you have designed everything correctly. All too often there is a custom data load and completely custom drill-through, which is often a sign that the data model is incorrectly built.

All this is why Drillbridge really shines in terms of drill-through ROI: you can keep your existing cube, automation, administration, and everything. Your relational data can come from anywhere. You can build completely custom web reports, drill from drill reports to other reports, and more. Drill-through on those dynamic calc YTD members in your alternate Time hierarchy is super easy to support. There is no 1,000-item limit on Oracle drill-to-bottom queries, and more.

But best of all: fewer moving parts, rapid deployment, and your processes stay exactly the same. I think this really resonates with organizations around the world. The free edition of Drillbridge is now in production at over 30 companies around the world (probably more), and there is a growing list of happy Drillbridge Enterprise customers. Even better: I get emails every single day from people who love Drillbridge and are enhancing the value of their Essbase solutions for their users.

Essbase ASO Application Crashing When Retrieving Data


When retrieving data from an ASO application, the application may crash on certain retrievals.

This is due to internal optimizations that Essbase performs on MDX formulas.

If users are experiencing intermittent crashes while retrieving from an ASO application, set the following parameter in the essbase.cfg file:

FORMULAOPTLEVEL 2

FORMULAOPTLEVEL is an undocumented Essbase configuration setting that controls how MDX formulas are executed in Essbase. Essbase optimizes the execution of MDX formulas by analyzing various MDX expression patterns used in a formula. The FORMULAOPTLEVEL setting selectively turns off some of these optimizations, which may help resolve the application crashes. As with other essbase.cfg settings, restart the Essbase server after making the change so that it takes effect.

Thanks,
~KKT~

How to Login To Financial Reporting Studio When Using Planning and Budgeting Cloud Services (PBCS)


What is the format to log in to Hyperion Financial Reporting (FR) Studio when using Planning and Budgeting Cloud Services (PBCS)?
To access FR Studio, the user name must be entered in IDENTITY_DOMAIN.USER_ID format, and the Domain field should be the domain URL (without the port):

User name: IdentityDomain.UserID
Password:
Domain URL: https://DomainName.oraclecloud.com
This is a different format from the EPM Workspace login, which is:

Username:UserID
Password:
Domain:DomainName

Also check the following documentation for reference:
http://docs.oracle.com/cloud/latest/pbcs_common/CSPGS/ch01s09.html

Thanks,
~KKT~

Dimension Build Settings…..


Dimension Build Setting Tab:

1. Here you will specify which Essbase dimension or dimensions you are building with this rules file. Yes, you can build two dimensions with one rules file; that is more advanced, so we'll leave it for another topic.

2. If a member already exists, do you want to allow moves (a new location in the hierarchy), allow property changes (e.g., changing the consolidation from ~ to +), or allow formula changes? Other options appear based on the dimensions selected, such as UDA changes.

3. Build Method – this is where you specify whether a given dimension is to use Generation References, Level References or Parent/Child references. As I mention in the Field Properties section, using Parent/Child references when available is the recommended approach. You can also check the box to process null values (this is really only applicable when building your dimension using Generation references).

4. Upon building the dimension, you can sort the members Ascending, Descending or not at all. Be careful with this option: if you accidentally click on a dimension name, say "Scenario", even though you are not building the Scenario dimension with this rules file, and you then check the Sort Ascending button, it WILL sort your Scenario dimension even though you did not intend to.

5. "Merge" keeps any existing members plus the new ones coming in, while "Remove Unspecified" deletes any member that is not included in the data source being loaded.

Dimension Definition Tab:

The key feature here is that you have the ability to change a Hyperion Essbase Block Storage database's dimension settings; the one I've used the most in practice is changing the Dense/Sparse settings via a load rule.

– Right click on any dimension name (select “Edit Properties”)

– If your dimension is currently Dense but you want it to be Sparse (say, for calculation performance), then change the "Configuration" setting to "Sparse".

– Note: You can implement this feature on an existing load rule (even if that load rule is building a different dimension) or create a separate one, but your data source must contain some sort of data; it cannot be blank.
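Once the rules file and its settings are saved, the dimension build can also be run outside EAS. Here is a minimal MaxL sketch, assuming a server-side source file and rules file with the placeholder names shown:

        import database Sample.Basic dimensions
            from server text data_file 'parchild'
            using server rules_file 'ParChild'
            on error write to 'dimbuild.err';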

Data Load Settings …..


Tabs for Data Load

1. When performing an Essbase data load, you have three data loading (mathematical) operations. For the sake of an example, say a specified intersection of data equals $10 and your new data is $3. You can "Overwrite existing values", which will turn $10 into $3; you can "Add to existing values", which will turn $10 into $13; or you can "Subtract from existing values", which will turn $10 into $7. Depending on your environment, each one of these can be used for different purposes.

2. This section can be tricky. An example I like to use is from an application I built a couple of years ago: revenue accounts came in from the source with a "-". It's just the way the source worked, so I flagged all revenue accounts with a UDA of "RevFlip". By doing this, any negative dollars that came in for these accounts were flipped to positive and vice versa. To make it work, you specify the UDA and the dimension it is associated with.

Tab for Header Definition

3. As I mentioned in the Field Properties section, if a data feed does not have a specific dimension represented, a user could add a new column with a given text string ("Misc"). That option will work, but it is not clean; instead, use the Header Definition described here. Simply add the default member name to the Name box at the top, or navigate to it in the outline and double-click on it. When you have more than a couple of dimensions not represented in the data source, this is a one-stop shop to apply all your defaults.
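For reference, once the load rule is in place, the equivalent data load can be scripted in MaxL. A minimal sketch, assuming a server-side data file and rules file with the placeholder names shown:

        import database Sample.Basic data
            from server text data_file 'salesq1'
            using server rules_file 'LoadQ1'
            on error write to 'dataload.err';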

~KKT~

Field Properties


For the Field Properties portion of an Essbase load rule, there are three tabs (Global Properties, Data Load Properties, and Dimension Build Properties). Global Properties affect both data load and dimension build rules, whereas Data Load Properties and Dimension Build Properties are specific to each. Get used to this menu option; you will use it quite often.

Global Properties

1. This section deals with whether the data is to be applied in its Original Case, Lowercase or Uppercase; Original is the default. You can add a Prefix (leading text) or a Suffix (ending text), and you can drop leading/trailing spaces. Keep in mind, adding a prefix/suffix will impact all members in a given dimension, not just one specific member. That's where #2 comes into play.

 2. Instead of changing all members within a given dimension with a prefix/suffix, you can specify one member to be modified with the 'Replace/With' section. Be careful with this: some member names are contained within other member names (East is part of Northeast), so if you replace all East with "(E) East" it will turn Northeast into "North(E) East", which is obviously not what you want. Therefore, make sure you check the Match Whole Word option.

 Data Load properties:

3. For your data load rules, you will use this section to specify a column's association with a dimension; you can manually enter the dimension name or, the easier way, double-click the dimension name and it will populate the Field Name box. Keep in mind, for any dimension name that contains a space, quotes will be applied to the name in the Field Name box (this is normal). Also, it is not always mandatory to use the dimension name; you may have 12 data columns which you would specify as "Jan", "Feb", "Mar", "Apr", etc. instead of putting in "Year".

4. You can specify whether a column is a data field or whether you would like to ignore the column. You would want to ignore a column when your data is coming in with excess columns that are not necessary for this load rule. The scale option allows you to take a number, say 1,000,000, and apply a scale of 0.001, which automatically changes your data from 1,000,000 to 1,000. This comes in handy when you have Millions/Thousands/Dollars member names; simply perform this pseudo-calculation on the way in as opposed to changing it in your data source.

5. You will cycle through each column (usually from left to right), specifying the dimension it is associated with or whether to "Ignore field during dimension build" (#7). Simply double-click the dimension name to apply it to the Dimension field at the top.

 6. For the given column that you are in, you will first select a dimension (see note 5), then you will select its association to the outline. For instance, you can build your dimension via Generation References, Level References or Parent-Child references. Generations are a top-down approach, where your database name is Generation 0, the dimension names are Generation 1 and so on. Level References are just the opposite, starting from the leaf level (or lowest possible level) at 0 and working up to the database name. Each has its pros and cons based on your data, because you may not always have the same number of generations or levels, so it might be tricky to build your rules file. The recommended approach is to use Parent-Child references, because no matter how many generations or levels you have, you will always have a parent-child relationship. No matter which method you choose, you need to associate a number with it; I've already discussed the generation and level numbers you would use, so for parent-child references, please use 0. Keep in mind, in this section you have the ability to assign not only a member name, but also its Alias (description), UDA (user defined attribute, which can be used in security filters and partitions), and its Property (i.e., + for aggregation, ~ to ignore, or – for subtraction). A sample parent-child source file is shown after this list.

 7. This option simply ignores the column during the dimension build.
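To illustrate the parent-child layout described in note 6, a three-column source (parent, child, alias) for the Product dimension of Sample.Basic might look like the following; the rows are examples only:

        Product,100,Colas
        100,100-10,Cola
        100,100-20,Diet Cola
        100,100-30,Caffeine Free Cola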

Step by step guide to create Rule file in Essbase …..


To create an Essbase rules file, you have the following options:
– Expand the Application >> Expand the Database >> Right-click on 'Rules Files' >> Select 'Create Rules File' (Recommended Approach)
– Using your toolbar, select: File >> New >> Scripts >> Rules Files >> OK (Not Recommended Approach, see note 14 – Validate)

Once the rules file is open, your toolbar will appear as below:

1. New/Open/Save/Print/Send Email – Pretty generic here, but if you need help, let me know.

 2. Move Field – Moves any column to a new column (kind of like re-ordering your columns). This option is useful when creating rules files that require columns to be in a specific order for the load to succeed. For example, say you are creating a dimension build rules file using parent-child references, and you have 3 columns of data coming in from your source: column 1 = child, column 2 = parent and column 3 = alias. You have two options here: reorder your columns in the source (the recommended option, but not always possible) or use this option to reorder your columns to Parent, Child, Alias.

3. Split Field – Like most of the titles for these buttons, it does what the title says: it splits a given field into two fields. It does NOT split the field based on a delimiter like "|", but rather after so many characters. So if you always want to split the field at the 3rd character, for instance, then this is your tool; but if you want to split the field based on a "|", then you will need to modify your source data. Keep in mind, you do have the ability to split a data feed by a normal delimiter, but that applies to all columns; see note 11 for more details on setting the delimiter for your data source.

4. Join Field – Joins 2 or more fields into one field (add more fields by holding down the Ctrl key). Please note, when using this option, if you join two fields they become 1 field; the original independent columns will no longer exist. If you would like the independent fields to remain intact after the join, see note 5, "Create a Field Using Join".

 5. Create a Field Using Join – Creates a new field using a join. Same idea as #4, but when you create a new field from 2 columns, the 2 columns remain intact after the field is created. Both 4 and 5 have their advantages; it just depends on your data and what you are trying to accomplish. If you need the original fields in addition to the new one, use option 5 (Create a Field Using Join), but if you do not need the original fields after the join, use option 4 (Join Field).

 6. Create a Field Using Text – Adds a new column containing whatever text you input. I've seen beginners use this option to add a text field during a data load where a given dimension was missing from the data source. For instance, in the Essbase Sample.Basic database, you perform a data load but you do not have a column for Measures (highly unlikely, but this is just an example). So you can add a column with the text "Misc" to load to the Misc member name. The more advanced way to do this is detailed in Note 12 (Data Load Settings).

7. Field Properties – Please click here to see a detailed description of the Field Properties option.

 8. Select Records / Reject Records – Within a given column, this gives you the option to select or reject certain records. I am not a big fan of these options, as it is always better to have the appropriate data coming from the source.

 9. Data Load Fields / Dimension Build Fields – These two options toggle the view of your load rule (between the Data Load view and the Dimension Build view). When building a data load rule or a dimension build rule, make sure the right button is toggled appropriately. I've seen beginners struggle with dimension builds because they were unaware of this option.

 10. Associate Outline – When building a rules file, it is extremely helpful to associate the outline that you are building the rules file for. This is not strictly necessary, but you will be unable to validate your rules file without performing this operation (see note 14 – Validate, for more details).

 11. Data Source Properties – The Data Source has four tabs:
– Delimiter – As mentioned in Note 3 (Split Field), this works like any other delimiter that splits a file or data source by a specified delimiter
– Field Edits – Lists all edits that have been made to this load rule (held in order of creation). For example, if you have 5 field edits, you cannot delete #3 without deleting #4 and #5 (since #4 or #5 might be dependent on #3)
– Header – You can use this option to skip the first line of a data source (usually the header record), or, if your data source is set up properly, you can use the header record as the data load field names or dimension build field names (see the sample file after this list)
– Ignore Tokens – Here you can set up certain tokens to be ignored.

 12. Data Load Settings – Please click here to see a detailed description of the Data Load Settings option.

 13. Dimension Build Settings – Please click here to see a detailed description of the Dimension Build Settings option.

 14. Validate – As mentioned in note 10 (Associate Outline), in order to properly validate a rules file, an outline needs to be associated with it. If you created your rules file from the File menu, then associating an outline is necessary; however, if you navigated Application >> Database >> Rules Files >> Create Rules File, then the outline is already associated.
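To illustrate the header record mentioned in note 11, here is a hedged example of a comma-delimited data source whose first line can double as the field names (the member names follow Sample.Basic and the data value is made up):

        Product,Market,Year,Scenario,Measures,Data
        100-10,New York,Jan,Actual,Sales,678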