Calculation Manager 11.1.2.2 Fails to Export Member Block Variable


During a recent project involving Oracle Hyperion Enterprise Performance Management (EPM), we encountered an issue migrating Calculation Manager rules via Lifecycle Management (LCM). While LCM did a good job of re-creating most of the variables used in the rule on the target, one variable did not make it. On further investigation we found that this variable is different from the others. Consider a rule of the format:

FIX ({var1}, {var2})
{var3} (
…………………
)
ENDFIX

In this case, we found that the variables {var1} and {var2} were exported from the source just fine, including their properties such as limits and prompts. However, {var3} was not exported.

When we tried exporting the rule via the Calculation Manager “System View” interface, we found that it did not export this variable there either, so the problem lies with Calculation Manager itself rather than with LCM. The exported XML file contains a <variables> section at the beginning of the file. This section lists {var1} and {var2}, but not {var3}, even though {var3} is referenced in the code block later on.
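For illustration, the exported file is shaped broadly like the sketch below; the element and attribute names here are paraphrased, not copied from an actual export:

<rule name="MyRule">
  <variables>
    <variable name="var1" type="member"/>
    <variable name="var2" type="member"/>
    <!-- {var3}, the member block variable, is missing from this list -->
  </variables>
  <body>
    FIX ({var1}, {var2})
    {var3} (
    ...
    )
    ENDFIX
  </body>
</rule>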

Oracle Support was able to reproduce the issue in-house and registered a bug for it: Bug 18227617 – MEMBER BLOCK VARIABLE DOES NOT GET IMPORTED USING LCM; NOT SEEN IN VARIABLE LIST.

As a workaround, either create the variable manually on the target (a one-time task if the variable is “Global”), or export/import the variables directly from Calculation Manager, separately from the rules.

The good news is that this issue does not seem to exist in Calculation Manager 11.1.2.3. I guess just another reason to keep up with upgrade cycles.
Thanks,
~KKT~

ODI – Load Data Directly From The Source To The Target Without Creating Any Temporary Table


It is useful to understand why and when ODI 10g and ODI 11g create temporary tables. The descriptions below apply to standard cases.

C$ Collect tables

A C$_ table is created by an LKM in order to transfer data from a source server to the staging area.

This allows ODI to join data sets coming from heterogeneous source data servers. For example, if you are using an Excel spreadsheet and an Oracle table as sources to load a Teradata database, ODI will retrieve the content of the Oracle table and the Excel file into two collect tables, both hosted on the staging area, where the data coming from Excel and Oracle can then be joined.

If the Integration Interface contains several source Datastores that are joined on the source, the C$_ table will contain the result of the join operation. If the join is set to be executed on the staging area, ODI will create two distinct C$_ tables.

Notice that when the source Datastores are on the same server as the target Datastore, or on the same server as the staging area, there is no LKM to set at the Integration Interface level.

I$ Integration tables

Most of the time, the I$_ tables are created by the IKMs whenever there is a need to update a target table or to run flow control.

If this is not required, creating the I$_ table is of no use, since ODI can retrieve the data directly from the collect tables. The I$_ table has an almost identical structure to the target table, as it holds the data that ODI is about to insert into the target table or use to update it.

E$ Error tables

From an Integration Interface, E$_ tables are created and populated by the CKM if the IKM option called Flow Control is set to yes.

So the C$_, I$_, and E$_ tables are of no use only when all of the following conditions are met:

Only one source Datastore, not journalized
Loading strategy = Truncate/Insert or basic Insert (no updates or deletes)
No flow nor static control is required

In such a case, it is more efficient to load the data directly into the target table.

Solution 1: Use the existing “IKM SQL to SQL Append”

The simplest solution is to set the Staging Area to the same Logical Schema as the one that hosts the Source Datastore Model, even if the source Datastore is on a technology that does not offer any transformation engine (for example, the File or Excel technologies).

At the Flow tab level, click on the Target and select the “IKM SQL to SQL Append”.

Even though this solution works very well, it is not the most efficient one in terms of execution time.
Solution 2: Write your own IKM, using database load utilities (Oracle SQL*Loader, Microsoft SQL Server BCP…)

When dealing with flat files, for example, using a database loader utility such as SQLLDR is more efficient.

In this example, using the “LKM File to Oracle (SQLLDR)” is not a solution because the LKM will create a C$_ table.

Unfortunately, no Integration Knowledge Module (IKM) exists that uses SQLLDR.

This is a typical situation where a customer specific Knowledge Module is required.

The example below illustrates how to create such an IKM, from the “IKM SQL to SQL Append”.
1. Duplicate the “IKM SQL to SQL Append” and rename the copy as “IKM File to SQL Append (SQLLDR)”

2. Edit this new IKM and, from the “Details” tab, remove the following command lines, then click on Apply to validate the updates:
Lock journalized table
Cleanup journalized table

Those two KM lines are of no use as we are dealing with flat files.

3. Right click on the “IKM File to SQL Append (SQLLDR)” from the Knowledge Module tree and add 3 new options:
LOA_DIRECT
LOA_DISCARDMAX
LOA_ERRORS

The definitions of these new options should be strictly identical to those of the options with the same names in the “LKM File to Oracle (SQLLDR)”.

When creating the new options, do not set any value in the “Position” field; it will be filled in automatically by ODI.

4. Edit the “IKM File to SQL Append (SQLLDR)” and add a new KM line called “Generate CTL file”.

From the “Command on Target” tab, set the technology to ODI Tools.

Do not close the window.

5. Right-click on the “LKM File to Oracle (SQLLDR)”, select “Open in a new window” and go to the Details tab.

6. Right-click on the command line called “Generate CTL file” and select the code written in the “Command” field of the “Command on Target” tab.

7. Copy the code and paste it into the “Command” field of the “Command on Target” tab of the new KM line created during step 4, then click on Apply.

8. Launch the expression editor and search for the following section of code:

INTO TABLE <%=snpRef.getTable("L", "COLL_NAME", "W")%>
(
<%=snpRef.getColList("", "[CX_COL_NAME]\tPOSITION([FILE_POS]:[FILE_END_POS])", ",\n\t", "", "")%>
)

and replace it with:

INTO TABLE <%=snpRef.getTable("L", "TARG_NAME", "A")%>
(
<%=snpRef.getColList("", "[COL_NAME]\tPOSITION([FILE_POS]:[FILE_END_POS])", ",\n\t", "", "")%>
)

9. Search for this other piece of code:

INTO TABLE <%=snpRef.getTable("L", "COLL_NAME", "W")%>
FIELDS TERMINATED BY X'<%=snpRef.getSrcTablesList("", "[FILE_SEP_FIELD]", "", "")%>'
(
<%=snpRef.getColList("", "[CX_COL_NAME]", ",\n\t", "", "")%>
)

and replace it with:

INTO TABLE <%=snpRef.getTable("L", "TARG_NAME", "A")%>
FIELDS TERMINATED BY X'<%=snpRef.getSrcTablesList("", "[FILE_SEP_FIELD]", "", "")%>'
(
<%=snpRef.getColList("", "[COL_NAME]", ",\n\t", "", "")%>
)

In effect, these substitutions point SQLLDR directly at the target table (TARG_NAME) instead of the C$_ collect table (COLL_NAME), and map the file columns straight to the target columns ([COL_NAME] rather than the [CX_COL_NAME] work-table aliases).

10. Click on Apply at the Expression Editor level and at the KM line level.

11. Move the new KM line up so that it comes right after the one called “Delete target table”.

12. Add a new KM line called “Call sqlldr via Jython”. From the “Command on Target” tab, set the technology to Jython. The “Command” field of the “Command on Target” tab should have strictly the same content as that of the corresponding step of the “LKM File to Oracle (SQLLDR)”.
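For orientation, the Jython step in the shipped LKM essentially builds a SQLLDR command line and executes it through the operating system. The sketch below is a simplified, hypothetical illustration of that pattern, not the actual KM code: the file paths, the userid string, and the return-code handling are placeholder assumptions.

import os

# Hypothetical sketch: build the sqlldr command line and run it.
# The .ctl/.log paths and the userid string are placeholders; the real KM
# derives them from the ODI work directory and topology connection details.
ctlfile = r'C:\odi\work\MY_TARGET.ctl'
logfile = r'C:\odi\work\MY_TARGET.log'
userid = 'odi_user/odi_password@ORCL'

rc = os.system('sqlldr control=%s log=%s userid=%s' % (ctlfile, logfile, userid))

# SQLLDR conventionally returns 0 on success and 2 on warnings (for example,
# some rejected rows); raise an exception for anything else so the ODI
# session reports the step as failed.
if rc != 0 and rc != 2:
    raise Exception('SQL*Loader failed with return code %s' % rc)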

13. Click on Apply at both the new KM line and KM level.

14. Override the existing Description and Restriction of the “IKM File to SQL Append (SQLLDR)” with those of the “LKM File to Oracle (SQLLDR)”.

15. To use the IKM in an Integration Interface, set the Staging Area to the same Logical Schema as the one that hosts the source Model Datastore.
Thanks,
~KKT~

ODI with HFM – How to Configure


This post covers how to install and configure the HFM client so that ODI can make a successful connection to HFM applications.

1. Install the HFM ADM Driver using EPM System Installer:

[Screenshot: EPM System Installer]
2. Configure DCOM using the EPM System Configurator:
a. Select the EPM Oracle Instance to which the configuration will be applied
b. Select HFM Configure DCOM
c. Configure DCOM

[Screenshot: EPM System Configurator]
3. Copy CASSecurity.exe (where <EOH> is the EPM Oracle Home) from
<EOH>\products\FinancialManagement\Common
to
<EOH>\products\FinancialManagement\Client
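On Windows this amounts to a single file copy; for example, assuming <EOH> resolves to C:\Oracle\Middleware\EPMSystem11R1 (an illustrative path, adjust it to your installation):

copy "C:\Oracle\Middleware\EPMSystem11R1\products\FinancialManagement\Common\CASSecurity.exe" "C:\Oracle\Middleware\EPMSystem11R1\products\FinancialManagement\Client"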

Thanks,
~KKT~

Essbase – What Does the Dynamic Calculator Cache Do? (DYNCALCCACHEMAXSIZE)


Hyperion Essbase uses a separate Dynamic Calculator Cache for each open database. The single DYNCALCCACHEMAXSIZE setting in the configuration file, ESSBASE.CFG, specifies the maximum size for each dynamic calculator cache on the server. By default, the maximum size is 20MB. Hyperion Essbase allocates area in a dynamic calculator cache for data blocks, as needed, until it has allocated the maximum memory area specified by this setting.

Hyperion Essbase writes two messages to the application event log for each data retrieval. As shown in the example below, the first message reports the total time required for the retrieval.

Local/Sample/Basic/admin/Info(1020055)
Spreadsheet Extractor Elapsed Time : [0] seconds

Local/Sample/Basic/admin/Info(1020082)
Spreadsheet Extractor Big Block Allocs -- Dyn.Calc.Cache : [3] non-Dyn.Calc.Cache : [0]
If a dynamic calculator cache is used, the second message shows the number of blocks calculated within the dynamic calculator cache (Dyn.Calc.Cache : [n]) and the number of blocks calculated in memory outside the dynamic calculator cache (non-Dyn.Calc.Cache : [n]).

To determine if the dynamic calculator cache is being used effectively, review both of these messages and consider what your settings are in the ESSBASE.CFG file. For example, if the message indicates that blocks were calculated outside as well as in a dynamic calculator cache, you may need to increase the DYNCALCCACHEMAXSIZE setting. If the specified maximum size is all that you can afford for all dynamic calculator caches on the server and if using regular memory to complete dynamically calculated retrievals results in unacceptable delays (for example, because of swapping or paging activity), set DYNCALCCACHEWAITFORBLK to TRUE.
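For example, the following ESSBASE.CFG entries (illustrative values, not recommendations) raise the per-database cache ceiling to 50 MB and tell retrievals to wait for space in the cache rather than calculate blocks outside it; restart the Essbase server for configuration changes to take effect:

DYNCALCCACHEMAXSIZE 50M
DYNCALCCACHEWAITFORBLK TRUE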

Thanks,
~KKT~

Essbase – Trace MDX query running in Essbase database


MDX queries run against an Essbase application can be traced by adding a line to the Essbase configuration file, \Oracle\Middleware\user_projects\epmsystem1\EssbaseServer\essbaseserver1\bin\essbase.cfg:

TRACE_MDX appname dbname 2
The appname and dbname refer to the specific application and database that you want to trace. Both are optional; omit them to trace all applications.
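For example, to trace only the Sample application’s Basic database:

TRACE_MDX Sample Basic 2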

The MDX queries are dumped to a file named mdxtrace.log in the ARBORPATH/app/appname/dbname folder, for example /Oracle/Middleware/user_projects/epmsystem1/EssbaseServer/essbaseserver1/app/Sample/Basic. The log contains:

The time stamp at the time the query completed.
The elapsed time of the query.
For example, after executing an MDX query against the Sample/Basic database, mdxtrace.log contains:

===============================================================
Following MDX query executed at Mon Apr 08 08:56:13 2013
===============================================================
SELECT
{[100-10], [100-20]} ON COLUMNS,
{[Qtr1], [Qtr2], [Qtr3], [Qtr4]} ON ROWS
FROM Sample.Basic

=== MDX Query Elapsed Time : [0.009] seconds ===================

Note: Only queries that executed successfully are printed to the file. Queries that end in error are not printed.

Thanks,
~KKT~

Essbase 11.1.2.3.x Calculations/Business Rules Performance Degradation


Calc scripts and business rules run slowly compared to the Essbase v11.1.1.3 release, typically 50-60% slower.
The cause: the database is set to use Direct I/O rather than Buffered I/O, while all performance tuning done in 11.1.2.3.x has been with Buffered I/O.

Set the database to use Buffered I/O instead of Direct I/O. In the Essbase Administration Services Console:

1. Expand to the database.

2. Right-click on the database name and Edit->Properties.

3. On the Storage Tab, change the Pending IO Access Mode to Buffered I/O.

4. Stop/Start the application.

5. Run a restructure on the database.
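Alternatively, the same change can be scripted in MaxL; a minimal sketch, assuming a database named Sample.Basic (substitute your own application and database):

alter database Sample.Basic set io_access_mode buffered;

As with the console steps, stop and restart the application and then restructure the database for the pending setting to take effect.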

Thanks,
~KKT~

Essbase – Set the Essbase Administration Services (EAS) Session Timeout Interval


The session timeout can be set in the web.xml file where the Essbase Administration Services Server component is installed.

For example, if EAS is deployed with WebLogic, the file is located in:

C:\Hyperion\AnalyticAdministrationServices\AppServer\InstalledApps\WebLogic\8.1\aasDomain\servers\aas\webapps\eas\WEB-INF\web.xml.

Modify the session-timeout setting, whose value is expressed in minutes (1440 = 24 hours), in the following section:

<session-config>
<session-timeout>1440</session-timeout>
</session-config>

Thanks,
~KKT~

Essbase – Languages Supported by UTF-8 Compliant Aggregate Storage (ASO) Applications


What languages are supported by ASO non-Unicode Applications in a UTF-8 compliant system?

Version 9.3.3 is not localized; it supports only the English language. Version 11.1.1.2 and later are localized. For more detailed information, refer to the section called ‘Supported Languages’ in each release’s readme.

Which languages are supported by BSO Unicode applications in UTF-8 compliant systems?

If an application is created as a Unicode application, it can support English, French, German and Hungarian characters in the same database.

When the system is UTF-8 compliant, but the ASO application is non-Unicode, what language character set can be used to load alias tables?

For non-Unicode applications, valid characters are limited to the character set of the ESSLANG setting.

Thanks,
~KKT~

Essbase – Move BSO Data to a New Location


To move the location where the Essbase data is stored:

1. From the Essbase Administration Services (EAS) Console, connect to Essbase.
2. Expand to the database you want to change.
3. Right-click and select Edit->Properties. On the ‘Storage’ tab, set the new disk volume locations that the .pag and .ind files should span to.
4. Stop and start the application for the new drive locations to be set.
5. Force a dense restructure using one of the following methods:

Right-click on the database and select ‘Restructure’.
Save the outline after a change that affects a dense dimension, such as renaming a dense member or adding or moving a dense dimension member.
Export the data, clear the data, and reload the exported data.

Any of these methods forces the .pag and .ind files to be moved to the new drive.
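The first method can also be scripted in MaxL; a minimal sketch, again assuming a Sample.Basic database:

alter database Sample.Basic force restructure;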

Thanks,
~KKT~

Essbase – Kill the Essbase Processes/Sessions in ‘Terminating’ Status


Run the following two MaxL commands to end the sessions:
alter system logout session all force;
alter system kill request all;
If the above two MaxL commands do not end the session, the only way to get rid of them is to stop/start Essbase.

Use the following MaxL commands to shutdown Essbase:

alter system unload application all;
alter system shutdown;
These commands cleanly unload the applications currently in memory. If there is a hung process on one of the applications, Essbase will not shut down and you will need to kill the hung ESSSVR process. See Document 583519.1 for information on killing ESSSVR processes.
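On Unix platforms, the usual approach is a generic process lookup and kill; the commands below are standard OS commands shown for illustration, not taken from the referenced document:

ps -ef | grep ESSSVR     (identify the PID of the hung application's ESSSVR process)
kill <pid>               (use kill -9 <pid> only if the process ignores a normal kill)

On Windows, end the corresponding ESSSVR.exe process from Task Manager.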

If these sessions are not causing an issue, they will be cleared at the next scheduled recycle of Essbase.
Thanks,
~KKT~