FDMEE – Export the ODI Operator Log from the ODI Console


I frequently get asked about this, so here is the solution.

This is a simple procedure that can be carried out by anyone who has access to the ODI console and knows the username and password for the ODI Console.

  1. Log on to the ODI Console by going to http://server:19000/odiconsole in a web browser (where server is the server where FDMEE is installed).
  2. Expand ‘Runtime’, then expand ‘Sessions/Load Plan Executions’ and you should be able to see the ODI sessions that have been run.
  3. Click on ‘Sessions/Load Plan Executions’ and then click the ‘Export’ icon (a small white square with a blue arrow pointing upwards on the ‘Browse’ tab).

Show the Operator logs

4. In the ‘Export Log’ screen, select a date and time for the ‘From’ and ‘To’ values that match when the FDMEE process started and finished (so that only a relevant subset of the logs are exported).

Set date filters for the export

5. Click the Export button to download the exported log file.



FDMEE Automated Batch Loads Time Out

In Financial Data Quality Management Enterprise Edition (FDMEE), automated batch loads time out.

The two most likely causes of this issue are:

  • Batch timeout setting not defined or insufficient in system and/or application settings in the FDMEE application
  • Stuck thread timeout setting needs increasing in the ErpIntegrator0 managed server (WebLogic)

To increase the batch timeout setting in the FDMEE application:

  1. Navigate to Data Management from EPM workspace.
  2. On the Setup tab, under Configure, select System Settings.
  3. In System Settings, from Profile Type, select Other.
  4. Define a timeout value for the “Batch timeout in minutes” setting.

To increase the Stuck Thread timeout setting for the ErpIntegrator0 managed server in WebLogic:

  1. Navigate to the WebLogic Admin Console.
  2. Under Domain Structure, expand Environment and choose Servers.
  3. Choose ErpIntegrator0 from the servers list.
  4. Under Configuration, choose Tuning.
  5. Choose Lock & Edit to modify the setting value.
  6. Increase the Stuck Thread Max Time value. The value is in seconds; by default it is 600 seconds (10 minutes).
  7. Choose Activate Changes so that the settings take effect.
  8. Restart the FDMEE service.
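For scripted environments, the same change can be made with WLST. The sketch below is a hedged example, not a tested procedure: the admin URL, credentials, and new timeout value are placeholders, and it assumes the ServerMBean in your WebLogic version still exposes the StuckThreadMaxTime attribute.

```python
# WLST sketch (run via wlst.sh / wlst.cmd); host, port, and credentials are placeholders.
connect('weblogic', 'password', 't3://adminhost:7001')
edit()
startEdit()
cd('/Servers/ErpIntegrator0')
cmo.setStuckThreadMaxTime(1200)   # value in seconds
save()
activate()
disconnect()
```

As with the console steps, restart the FDMEE managed server afterwards so the new value takes effect.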

Hope this helps.



FDQM/EE Loading Metadata to on-premise and cloud EPM Applications – Part-2

Import Format

The import format maps the columns from the source file to the dimensions in the custom application, or sets defaults for properties that are the same for every member. In other cases, some of the members are set to the text “BLANK” so that mapping will succeed; values for the BLANK members may or may not then be set by the event script. The use of the BLANK text is specifically required when using the output .dat file to generate the metadata file. If pulling directly from TDATASEG, this method may not be needed.

Please also note that the AMOUNT field should be set to a valid numeric amount so that the input file is processed successfully. In addition, a SKIP row may be required if the input file includes a header record.
For this example, the Import Format is defined as follows:

Event Scripts
In this example, an AftExportToDat and a BefLoad script were used to process the file and then push the metadata to HFM. Everything could be performed in a single script if desired, and alternate implementations are possible. The script details are as follows:

AftExportToDat – Generate the metadata file in the format required by HFM

import shutil
#Open the .dat file generated by FDMEE
datFileName = str(fdmContext["OUTBOXDIR"].replace("\\","/")) + "/" + str(fdmContext["TARGETAPPNAME"]) + "_" + str(fdmContext["LOADID"]) + ".dat"
datFile = open(datFileName,"r")
#Open the .app file to be created for HFM
appFileName = str(fdmContext["OUTBOXDIR"].replace("\\","/")) + "/" + str(fdmContext["TARGETAPPNAME"]) + "_" + str(fdmContext["LOADID"]) + ".app"
appFile = open(appFileName,"w")
#Create .app file header lines
appFile.write("!MEMBERS=Entity\n'Label;DefCurrency;AllowAdjs;IsICP;AllowAdjFromChildren;SecurityClass;UserDefined1;UserDefined2;UserDefined3;HoldingCompany;SecurityAsPartner;Default Parent;Description(s)\n")
#Include a counter to skip the header record in the .dat file generated by FDMEE
i = 0
#Parse the records in the .dat file & replace BLANKs with an empty string.
#(FDMEE puts BLANK for empty columns by default & these might be rejected by the target EPM application.)
for memberLine in datFile:
    i = i + 1
    if i > 1:
        memberLine = memberLine.replace("BLANK","")
        memberLine = memberLine.split(",")
        #Derive the Default Parent: the parent of the deepest populated level, or #root
        if memberLine[15] != "":
            DefParent = memberLine[14]
        elif memberLine[14] != "":
            DefParent = memberLine[13]
        else:
            DefParent = "#root"
        #Write the .app file record for the member using the below format.
        appFile.write("%s;%s;%s;%s;%s;%s;%s;%s;%s;%s;%s;DefaultParent=%s;English=%s\n" % (memberLine[0],memberLine[1],memberLine[2],memberLine[3],memberLine[4],memberLine[5],memberLine[6],memberLine[7],memberLine[8],memberLine[9],memberLine[10],DefParent,memberLine[12]))
#Create another header record in the .app file for the hierarchy section.
appFile.write("\n!HIERARCHIES=Entity\n")
#Close the .dat file
datFile.close()
#Open the .dat file again to parse the records for the hierarchy section.
datFile = open(datFileName,"r")
#Track records already written so that hierarchy sections are not repeated in the .app file.
j = 0
seen = set()
for hierarchyLine in datFile:
    j = j + 1
    if j > 1:
        hierarchyLine = hierarchyLine.replace("BLANK","")
        hierarchyLine = hierarchyLine.split(",")
        gen1 = ";%s\n" % (hierarchyLine[13])
        gen2 = "%s;%s\n" % (hierarchyLine[13],hierarchyLine[14])
        gen3 = "%s;%s\n" % (hierarchyLine[14],hierarchyLine[15])
        if gen1 not in seen and hierarchyLine[13] != "":
            appFile.write(gen1)
            seen.add(gen1)
        if gen2 not in seen and hierarchyLine[14] != "":
            appFile.write(gen2)
            seen.add(gen2)
        if gen3 not in seen and hierarchyLine[15] != "":
            appFile.write(gen3)
            seen.add(gen3)
#Close the .app file & .dat file.
appFile.close()
datFile.close()

BefLoad – Load the file to HFM

#Import required HFM & Java libraries
from java.util import Locale
from java.io import File
import shutil
from oracle.epm.fm.hssservice import HSSUtilManager
from oracle.epm.fm.domainobject.application import SessionOM
from oracle.epm.fm.domainobject.loadextract import LoadExtractOM
from oracle.epm.fm.common.datatype import transport
#HSS Username / Password
UserName = "admin"
Password = "Password"
#Target HFM connection properties
clusterName = "HFMCluster"
application = "DEVHFM"
#Authenticate user & create HFM session
ssoToken = HSSUtilManager.getSecurityManager().authenticateUser(UserName,Password)
hfmsession = SessionOM().createSession(ssoToken, Locale.ENGLISH, clusterName, application)
#Load members to HFM
loadOM = LoadExtractOM(hfmsession)
appFileName = str(fdmContext["OUTBOXDIR"].replace("\\","/")) + "/" + str(fdmContext["TARGETAPPNAME"]) + "_" + str(fdmContext["LOADID"]) + ".app"
metadata_file = File(appFileName)
memberLoadOptions = transport.MetadataLoadOptions()
setCustomDims = [False,False,False,False]
loadInfo = loadOM.loadMetadata(metadata_file, memberLoadOptions)
logFile = loadInfo.getLogFile()
logFileName = str(fdmContext["OUTBOXDIR"].replace("\\","/")) + "/" + str(fdmContext["TARGETAPPNAME"]) + "_" + str(fdmContext["LOADID"]) + ".log"
fdmAPI.showCustomMessage("Metadata Loaded Successfully")

Continue – Part-3

FDQM/EE Loading Metadata to on-premise and cloud EPM Applications – Part-1

On-premise FDMEE will be used as the tool that facilitates loading metadata to the target EPM application. The high-level process to use on-premise FDMEE to load metadata is as follows:

1) Create a target custom application in FDMEE that includes a column for each of the attributes that the user wishes to load. Some of these attributes will be in the source file, and some may be defaulted during the import process. For example, the source file may contain the member name, description, parent, IsICP flag, SecurityClass, ValidForPlanType, etc.

2) Define an import format in FDMEE to map the columns in the source file to the dimensions in the target custom application. For attributes not in the source file, define an optional default in the import format.

3) Enter any necessary mapping rules for any required transformations.

4) When the Export button is clicked, FDMEE creates a .dat file in the outbox directory with all of the columns specified in the custom application definition. This file cannot be loaded directly into the target EPM application; it requires further processing to match the format required by the target.

5) Create an AftExportToDat event script that will be used to format the data loaded to FDMEE into the file format required to load data to the target application. An additional script (BefLoad) will also be used to call the necessary API to load the file to the desired target application. In some cases file headers, section headers, etc. may be required as part of the metadata load file.

Please note that there is more than one way to generate a metadata file from FDMEE that can be loaded to a target application. Some users prefer to load data to TDATASEG, and then query TDATASEG to select the necessary data. Other users will scan the system generated file from the export process to generate the file to load rather than selecting data from the TDATASEG table.

Both methods are valid, and it is really the user’s preference to select the method they desire. Once these artifacts are defined in FDMEE, users can begin to load metadata from a file using FDMEE. It is assumed that the reader of this document is familiar with FDMEE, and Jython scripting.

Custom Application Setup
The first step is to create a custom application in FDMEE that can be used as a dummy target for setting up the FDMEE artifacts. The custom application will contain a column for each attribute that the user intends to load as part of the metadata import process. In addition, the customer can also add description and attribute columns to the import format to extend the number of attributes beyond what is defined in the custom application. Please note that the customer is constrained by the TDATASEG table definition in regards to the number of columns that can be used to support the desired set of attributes. The set of columns that can be used includes Account, Entity, ICP, UD1-20, ATTR1-14, DESC1-2 which is a total of 39 columns that can be used to define the metadata that is desired to be loaded. (DATAX is not used because custom applications only support numeric data loads.)
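The column count in the paragraph above can be checked with a trivial sum over the column groups it names:

```python
# Count of TDATASEG columns available for metadata attributes,
# per the groups named in the text above.
column_groups = {
    "ACCOUNT": 1,
    "ENTITY": 1,
    "ICP": 1,
    "UD1-UD20": 20,
    "ATTR1-ATTR14": 14,
    "DESC1-DESC2": 2,
}
total_columns = sum(column_groups.values())
print(total_columns)  # 39
```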

Import Format
The import format is used to map the data from the input file to the dimensions in the target application, and is also a way to set defaults for attributes that are not included in the source file. As of the date of publication, custom applications only allow loading of numeric data, and the user will need to define the source file type as “Delimited – Numeric Data”, and provide a number as a default in the expression field for the amount. This does not impact the metadata load process, but is required to successfully load the file to FDMEE as non-numeric amounts are rejected by FDMEE while performing a numeric data load.

Additional FDMEE Setup
Users will perform the rest of the required setup like a regular data load process and define the required periods, categories, mapping rules and data load rules. Artifacts like periods and categories are not required to load metadata, but are required in FDMEE to successfully process a data load file and will need to be setup. A generic category/period mapping like Metadata / Metadata_Period for example can be created for this purpose.

Event Script
An event script is used to either extract data from the TDATASEG table and reformat it into a text file, or process the data file that is generated from the custom application export process. In addition to reformatting the data into the correct format for the target application, the event script must also call the appropriate API or command for the target application. For HFM, the script needs to call the HFM Java API. For on-premise Planning, the script must call the outline load utility or the REST API ( version). For the cloud, the script must call the REST API or EPMAutomate.

Load to on-premise Financial Management (HFM)
For this blog, we are assuming that the user is loading metadata to HFM version, and will be using the Java API to load the metadata file. The unique aspect of loading metadata to HFM is that the metadata load file includes multiple sections, and the event script that outputs the metadata and loads the file needs to take these different sections into consideration. This example will show how to load a data file in a specific format to an HFM application. It is possible that users may need to load a metadata file with additional sections not included in this example; in that case, reference the HFM administration guide to understand the format required for any additional information.

Please note that the process to generate the metadata file to load to HFM is as follows:

1. Import file into FDMEE.
2. Perform mapping.
3. Process *.dat file from the export step to generate member data and hierarchy data. The output file will be read one time to generate the individual member data, and one time to generate the hierarchy section of the file.
4. Any logic to determine the “Default Parent” needs to be defined accordingly.
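The Default Parent logic in step 4 can be sketched as a small Jython/Python function (the function name is illustrative): the default parent is the parent of the deepest populated hierarchy level, falling back to #root for members directly under the root.

```python
def derive_default_parent(level1, level2, level3):
    """Return the parent of the deepest populated hierarchy level,
    or '#root' when the member sits directly under the root."""
    if level3 != "":
        return level2
    elif level2 != "":
        return level1
    return "#root"

# A level-3 member and a level-2 member, as in the sample source file
print(derive_default_parent("World", "Europe", "United Kingdom"))  # Europe
print(derive_default_parent("World", "Asia", ""))                  # World
```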

Source File
For this example, the following file was used:

Asia,Asian Countries,N,OPWP_90,World,Asia,
Europe,European Countries,N,OPWP_90,World,Europe,
North America,Northern America,N,OPWP_90,World,North America,
United Kingdom,England Wales & Scotland,Y,OPWP_90,World,Europe,United Kingdom
Canada,Canada,Y,OPWP_90,World,North America,Canada

The input file includes a header row which documents the layout of the file. Loading metadata to HFM also requires a hierarchy, and this data is included in the metadata file in the last 3 positions, labeled HFMLevel1, HFMLevel2 and HFMLevel3.
The intended member output for the load file is defined as the following:


Some of the output is not included in the input file, and these items will be added as defaults from the import format, or as part of the event script that generates the output file.
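Parsing one row of the sample file above is a straightforward comma split. The column meanings below are inferred from the example data and are an assumption (in particular, the fourth field is treated as a defaulted attribute value):

```python
# One row from the sample source file. Column meanings (assumed):
# member name, description, IsICP flag, a defaulted attribute value,
# then the three hierarchy levels (HFMLevel1-3).
row = "Canada,Canada,Y,OPWP_90,World,North America,Canada"
name, description, is_icp, attr, level1, level2, level3 = row.split(",")
print(name, level2)  # Canada North America
```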

Custom Application Definition
The custom application used in this example uses the following dimensions:

Dimension Name Dimension Class Data Table Column
AllowAdjFromChildren Generic UD5
AllowAdjs Generic UD3
DefaultParent Generic UD12
DefCurrency Generic UD2
Description Generic UD13
HoldingCompany Generic UD10
IsICP Generic UD4
Level1 Generic UD14
Level2 Generic UD15
Level3 Generic UD16
MemberName Generic ACCOUNT
SecurityAsPartner Generic UD11
SecurityClass Generic UD6
UserDefined1 Generic UD7
UserDefined2 Generic UD8
UserDefined3 Generic UD9

Please note that this is just an example, and the custom application used for this process can use other dimensions as required – you are not locked into this definition. Below is a partial screen shot of the custom target application definition in FDMEE. (Please note that ACCOUNT is a required dimension in a custom application.)


Planning Forms Error: “Because of inactivity, your session has timed out and is no longer Active Click Ok to reload the page.”

When a user has multiple sessions of the Planning application open, they encounter the error below:

Because of inactivity, your session has timed out and is no longer Active Click Ok to reload the page.

This is a known behavior.

ADF maintains a fixed set of session tokens on the client side, and if the user launches multiple pop-ups in the app, they will run into this error.

This should not cause any functional loss; the user can click OK on the message pop-up and it should reload the page.

To get past the error message, you can increase CLIENT_STATE_MAX_TOKENS by modifying web.xml within HyperionPlanning.ear.


Please perform the action plan below:

1) Navigate to C:\Oracle\Middleware\EPMSystem11R1\products\Planning\AppServer\InstallableApps\Common\. Here you will find the HyperionPlanning.ear file.

2) Using a 7-Zip utility, open this archive and navigate to \HyperionPlanning.war\WEB-INF\.

The path in 7-Zip will look like “C:\Oracle\Middleware\EPMSystem11R1\products\Planning\AppServer\InstallableApps\Common\HyperionPlanning.ear\HyperionPlanning.war\WEB-INF\”

3) In this location you will find the web.xml file.

4) Edit web.xml to apply the solution: look for the CLIENT_STATE_MAX_TOKENS parameter, increase its value, and save the changes.

5) When you exit 7-Zip, it will report that the archive has been modified and ask whether to save the changes; save them.

6) Delete the existing WebLogic deployment /tmp folder for Planning, restart the services, and check the behavior.
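For reference, the relevant web.xml entry typically looks like the Trinidad context parameter below. This is a hedged example: the value 30 is purely illustrative, and you should confirm the parameter name against your own web.xml before editing.

```xml
<!-- Illustrative values only; increase cautiously and test. -->
<context-param>
  <param-name>org.apache.myfaces.trinidad.CLIENT_STATE_MAX_TOKENS</param-name>
  <param-value>30</param-value>
</context-param>
```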

Please ensure you try this in a QA/Dev/Test environment first.

Please note that setting this property to a very high value results in caching too many session tokens on the client side, so evaluate the performance impact before increasing it.



Hyperion Planning Utilities Syntax – Reference Guide


As we all know, there are different utilities available in Hyperion Planning. In this post I have tried to cover some of them.

  • ExportSecurity
  • ImportSecurity
  • FormDefUtil
  • ProvisionUsers
  • UpdateUsers/UpdateNativeDir

By default, Planning utilities are installed in the EPM_ORACLE_INSTANCE/Planning/planning1 directory.


The ExportSecurity utility exports Planning access permissions to the SecFile.txt file, enabling you to export and import access permissions across applications.

  • If you specify only mandatory (not optional) parameters, all access permissions to all artifacts for all users and groups are exported. You can limit the export by specifying a member parameter (but only one member-based parameter).
  • You can specify the optional parameters in any order.
  • You can use only /S_USER or /S_GROUP, not both.
  • Use the /S=searchCriteria parameter to specify users and groups with the same name.
  • Running the utility creates a file named SecFile.txt, which contains the exported access permissions.


ExportSecurity [-f:passwordFile] /A=appname,/U=username,[/S=searchCriteria|/S_USER=user|/S_GROUP=group],[/S_MEMBER=memberName|/S_MEMBER_ID=memberName|/S_MEMBER_D=memberName|/S_MEMBER_IC=memberName|/S_MEMBER_C=memberName],[/DELIM=delim],[/DEBUG=true|false],[/TO_FILE=fileName],[/HELP=Y]

For example, to export access permissions for a user and group named Sales, enter:

ExportSecurity /A=app1,/U=admin,/S=Sales

The exported file has the same format as the SecFile.txt example in the ImportSecurity section.


The ImportSecurity utility loads access permissions for users or groups from a text file into Planning.

  • All users, groups, and artifacts must be defined in the application.
  • Before importing access permissions on a user-defined custom dimension, you must allow access permissions to be set on it by selecting Apply Security.
  • You must name the text file SecFile.txt and save it in the planning1 directory.
  • Each line in the SecFile.txt file must specify access permissions information.

Example SecFile.txt:

North America,Account101,write,@ICHILDREN


ImportSecurity [-f:passwordFile] “appname,username,[delimiter],[RUN_SILENT],[SL_CLEARALL]”

(-f passwordFile is optional)

For example:

ImportSecurity "app1,admin,SL_TAB,1"

Use the FormDefUtil.cmd utility to move data form definitions from one Planning application to another.


formdefutil [-f:passwordFile] import|export filename|formname|-all servername username application

To import one file:

FormDefUtil.cmd import c:\EPM_ORACLE_INSTANCE\Planning\planning1\form1.xml localhost admin APP1

To export one file:

FormDefUtil.cmd export Form1 localhost admin APP1

To export all data form definitions:

FormDefUtil.cmd export -all localhost admin APP1

To import all data form definitions:

FormDefUtil.cmd import -all localhost admin APP1

The ProvisionUsers utility synchronizes Planning users, groups, and roles in Shared Services Console with a Planning application and with Essbase.


ProvisionUsers [-f:passwordFile] /ADMIN:adminName /A:appName [/U:user1[;user2;user3]] [/R:n]

To Synchronize all users in the App1 application:

ProvisionUsers /ADMIN:admin /A:App1

To Synchronize user Planner1 in the App2 application every 60 minutes:

ProvisionUsers /ADMIN:admin /A:App2 /U:Planner1 /R:60

If you move users and groups, their identities (SIDs) change; unless you synchronize the identities between Planning and Shared Services, their assigned access permissions are lost. This command is used in concert with the UpdateNativeDir utility. That utility and its documentation can be found in the EPM_ORACLE_INSTANCE/common/utilities/UpdateNativeDir directory. The two utilities are used in concert to synchronize changes.

It is important to run the utilities in this order.
  1. Run the UpdateNativeDir utility to update user and group identities in Shared Services (refer to the documentation on usage).
  2. Run the UpdateUsers.cmd utility to update the SIDs in Planning with the changes in Shared Services.


UpdateUsers [-f:passwordFile] serverName adminName applicationName


UpdateUsers.cmd ABCserver admin Planningapp

If your application formerly used Business Rules but Calculation Manager is now the selected calculation module, administrators can use the HBRMigrateSecurity.cmd utility to migrate launch access permissions on business rules and their projects from Business Rules to Calculation Manager business rules in Planning. The utility:

  • Overwrites launch access permissions that are already assigned to business rules in the specified Planning application.
  • Migrates access permissions only for users and groups that are provisioned for the specified Planning application in Shared Services Console.

Before running HBRMigrateSecurity.cmd:

  • If you are using Classic application administration, ensure that Calculation Manager is the selected calculation module for the application (see “Specifying System Settings” on page 231). If you upgraded the application to Performance Management Architect, then Calculation Manager is automatically selected as the calculation module.
  • Migrate business rules from Business Rules to Calculation Manager. See the Calculation Manager Designer’s Guide.
  • Deploy the business rules to Planning.


HBRMigrateSecurity.cmd /A:appname /U:admin /F:C:\temp\HBRExportedSecurity.xml


Administrators can use TaskListDefUtil.cmd (Windows) or TaskListDefUtil.sh (UNIX) to move task list definitions between Planning applications. You can export or import task list definitions to or from an XML file. When you export task list definitions, the utility creates an XML file in the current directory and logs errors in TaskListDefUtil.log in the EPM_ORACLE_INSTANCE/diagnostics/logs/planning directory. You can copy the utility to any directory and launch it from there to save files to another directory.

TaskListDefUtil [-f:passwordFile] import|export FILE_NAME|TASK_LIST_NAME|-all SERVER_NAME USER_NAME APPLICATION

To import one file:

TaskListDefUtil.cmd import c:\EPM_ORACLE_INSTANCE\Planning\planning1\TaskList1.xml localhost admin APP1

To export one file:

TaskListDefUtil.cmd export TaskList1 localhost admin APP1

To export all task list definitions:

TaskListDefUtil.cmd export -all localhost admin APP1

To import all task list definitions:

TaskListDefUtil.cmd import -all localhost admin APP1

Hyperion Planning Data Form Design – Things to keep in mind


While designing data forms in Hyperion Planning, keep the points below in mind during development; overlooking them may lead to performance issues with data retrieval.

Recommended Design

  • Keep dense dimensions in rows and columns.
  • Place sparse dimensions in the Page and Point of View (POV).
  • Place static dimensions in POV and hide these dimensions where not relevant to the form.
  • Place Scenario, Version, and Year dimensions in the Page wherever possible.
  • Use dynamic user variables and substitution variables as much as possible.
  • Use Run on Save and Run on Load for business rules on data forms only where business rules can complete execution within a short timeframe (say less than 30 sec.). All other business rules should be set to launch manually.
  • For Planning release 9.3.1 and later, set long-running business rules to run in the background. See the Hyperion Planning Administrator’s Guide for details on these settings.
  • Limit composite data forms to two data forms where possible.
  • Use the Suppress Missing Data option to skip #MISSING values in the resulting data forms.
  • Split single larger data forms into multiple smaller data forms with fewer rows and columns.
  • Minimize using account annotations on data forms.
  • For Planning release 9.3 and later, enable the Mass Allocate feature on data forms only where absolutely necessary. This feature runs calculation scripts that can impact data values at intersections to which the end user may not have access.


Optimal Design Example

Rows: Account
Columns: Time Period
Page/POV: Entity and other dimensions.

Sub-optimal Design Example

Rows: Entity
Columns: Year
Page/POV: Account, Time Period, and other dimensions.

Performance Considerations

  • The Run on Save and Run on Load calculation options place an additional demand on resources within the Essbase server for each save or load operation performed by end users. If it is necessary to use Run on Save or Run on Load, runtime prompts should be used to restrict the scope of the calculation and minimize the impact on users of the Essbase server.
  • Review the Hyperion Planning Administrator’s Guide for steps that detail the execution of large calculations in batch mode. For example, you can set Planning properties to have business rules switch to background processing after a threshold that you configure. In addition, you can run business rules in batch mode using the options under Tools > Business Rules.
  • When users access data form members that are dynamically calculated or have member formulas, an additional load is placed on the Essbase server. The impact is more acute during heavier user load.
  • The biggest impact on data form performance is the grid size. Grid size consists of the number of rows, multiplied by the number of columns. The grid size doubles if an application uses multiple currencies.
  • Adjust the number of cells retrieved to the memory on end users’ client machines. To determine the number of cells, multiply the number of rows by the number of columns.
  • The Suppress Missing Blocks option for rows allows placing large sparse dimensions in rows, while providing good response time if the density of the query is low. Only blocks with data are retrieved. For example, when using this option, you can place an employee dimension consisting of thousands of members in the rows, and place the entity in the page or POV. Then, only employees of the selected entity are retrieved.
  • Using Suppress Missing Data can improve performance. Before using this feature however it is recommended that you test the impact on performance.
  • Using account annotations impacts performance. Use this option only if account annotations are required.
  • If the Supporting Detail Detection Cache reaches 75% or higher, the cache size should be increased.
  • Enabling shared member security impacts performance. Use this option only if you want the base member’s security to be based on its own security and that of all of its shared members. If this option is not enabled, users with access to the base member still have access to all of its shared members.
  • The administrator should define data forms using dynamic user variables, to narrow the data form display to the dimensionality required by users. End users can set the value for the user variable in preferences. Review the Hyperion Planning Administrator’s Guide for these settings.
  • For areas with low bandwidth, it is recommended that users access Planning data forms using Smart View for Office for faster response time.
  • When Planning is first loaded, the first few requests to Planning can take longer because caches might not be loaded. It is recommended that an administrator or power user pre-load the most commonly used data forms before the general community uses the Planning server after each reboot to the Planning server.
  • It is highly recommended that administrators conduct performance tests on data forms to ensure that they meet user expectations. Data forms should be tested in both single and multi-user environments before they are deployed to production.
  • Data form definitions are cached when users log on to a Planning application. Because one cache is created for data form definitions, memory usage is not affected by the number of users viewing data forms. However, memory usage goes up if multiple users enter data in data forms at the same time.
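The grid-size rule of thumb above can be sketched as a small helper (the function name is illustrative): cells = rows x columns, doubled for multi-currency applications.

```python
def estimated_grid_cells(rows, columns, multi_currency=False):
    """Grid size = rows x columns; it doubles for multi-currency applications."""
    cells = rows * columns
    return cells * 2 if multi_currency else cells

print(estimated_grid_cells(100, 12))        # 1200
print(estimated_grid_cells(100, 12, True))  # 2400
```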


Data Form Size Estimation

To get a rough estimate of data form size, open the data form and select File > Save As from the browser. The size of the .HTML file is the portion of the data form that changes based on grid size. The .JS files remain the same size and can be cached, depending on browser settings. Information such as data form definitions, pages, and .gif files are not compressed when data forms are opened and sent to the Web browser.

About Suppression

Planning follows this basic sequence to suppress information in data forms, depending on suppression settings for Suppress Missing Blocks and Suppress Missing Data.

1. First, Planning evaluates the data form definition and creates a grid to send to Essbase.
2. If Suppress Missing Blocks is selected for data form rows:

a. Planning queries Essbase to determine which members in the data form definition have data blocks. This query typically takes only a few milliseconds. (This setting is most effective for sparse dimensions, and should not be used for dense dimensions.)
b. Planning then determines which members have data blocks available in Essbase, and filters out members for which the user does not have access permissions.

3. Next, Planning constructs a grid, and sends information to Essbase to fill in the data. (The constructed grid is generally very small, so the result is returned quickly from Essbase.)
4. If Suppress Missing Data is selected, Planning suppresses data for any #MISSING data element. (This operation generally occurs quickly. However, if a large volume of data is set to #MISSING, or blocks are created but have no data, this can take some time.)
5. Planning then queries the relational database, and marks every cell to show whether it has supporting details and cell text.
6. The Web data form is then presented to the user.
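Step 4 (Suppress Missing Data) can be modeled in a few lines: rows in which every cell is #MISSING are dropped from the grid before it is presented. This is a simplified illustration of the behavior, not Planning's actual implementation.

```python
MISSING = "#MISSING"

def suppress_missing_data(grid):
    """Keep only rows that contain at least one real value
    (a simplified model of the Suppress Missing Data option)."""
    return [row for row in grid if any(cell != MISSING for cell in row)]

grid = [
    ["100", MISSING, "250"],          # kept: has data
    [MISSING, MISSING, MISSING],      # suppressed: all #MISSING
]
print(len(suppress_missing_data(grid)))  # 1
```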

High Level Comparison of OPM Financials R12 and 11i

Changes to OPM Fiscal Policy setup

In Release 11.5.10 the Fiscal Policy (GL_PLCY_MST) record is linked to an OPM Company, and defines among other things which Operating Unit that Company is linked to and which Set of Books is used when looking at Account Code Combinations.

From Release 12 the Fiscal Policy record is keyed on Legal Entity and Ledger.  There is no longer any requirement to set up ‘Segment’ information linking MAC Accounts and GL Code Combinations.

From Release 12 the Event Fiscal Policy form (basically information relating to postings for Purchasing transactions) is accessed directly from the Fiscal Policies form GLFISCP.fmb – previously it was a separate menu entry.

OPM Organizations / Inventory Organizations

As is well known, from Release 12 the OPM and Oracle Inventory products have ‘converged’, meaning that the Release 11i entities in OPM like Organizations (SY_ORGN_MST) and Warehouses (IC_WHSE_MST) have disappeared.

Furthermore all transaction information is now saved in the Discrete Inventory tables like MTL_Material_Transactions – the Release 11i tables IC_TRAN_PND and IC_TRAN_CMP are not used at all.

Although Item Cost information is still held in OPM-specific tables (CM_CMPT_DTL, GL_ITEM_CST), all tables now use Organization_Id where previously Company Code and/or Warehouse Code would have appeared.  The Organization_Id column always joins to View ORG_Organization_Definitions.

For this reason you will see that the Release 11i ‘Cost Warehouse Associations’ setup form is now ‘Cost Organization Associations’.

OPM Items

In Release 11i each Item Number appeared once only in table IC_ITEM_MST.  This table is now obsolete, and all Items (Process and Discrete) are defined in table MTL_System_Items.

Each Item can be assigned to multiple Organizations (both Discrete and Process-Enabled) and appears that many times in the MTL_System_Items table.

From Release 12 the restriction where OPM Item Numbers could only consist of one Segment is lifted.

Cost Method / Cost Type

Name changes from Cost Method to Cost Type, but the setup form (CMMTMSED.fmb) is still owned by and accessed from OPM Financials, and writes to table CM_MTHD_MST.

Cost Calendars

In Release 11i Cost Calendars are keyed on Company Code and Cost Method.

From Release 12 they are still owned by OPM Financials (tables CM_CLDR_HDR, CM_CLDR_DTL) but are defined as an independent entity which can then be assigned to one or more Legal Entity and Cost Type combinations.

Costing Basics

OPM Costing Types (Actual, Standard, Lot) are still available as before.

In Release 12 you still need to run your cost calculation process and then Cost Update in order to populate tables GL_ITEM_CST and GL_ITEM_DTL ready for Period-End processing.

OPM Inventory Period Close

In Release 11i ‘Inventory Close’ is run from the OPM Inventory Control menu, and populates table IC_PERD_BAL.

From Release 12 the same function is carried out by concurrent program ‘GMF Period Close Process for Process Organizations’, run from the standard report submission form.  This writes to table GMF_Period_Balances and can be run ‘Final’ or ‘not Final’ as before.

Manufacturing Accounting Controller vs Subledger Accounting

Although the change from MAC to SLA appears to be massive and daunting, once you are familiar with the setup hierarchy in Release 12, that is to say:


Subledger Accounting Method (SLAM)
Application Accounting Definition (AAD)
Event / Journal Line Definition (JLD)
Account Derivation Rules (ADR)

… you will find parallels between the Release 11i and Release 12 setup.

Release 11i → Release 12

- Events (PORC, BTCH, OMSO, IADJ and so on) → Entities
- Sub-Events (CERT, RELE) → Event Class > Event Type
- Sub-Event link to Account Titles → Journal Line Definition link to Journal Line Types
- Account Mapping → Account Derivation Rules

The Release 11i concept of mapping an Account in two halves (the Accounting Unit and the Account Number) no longer exists in Release 12, however the logic which you can apply in Account Derivation Rules is enormously powerful and flexible.

Depending on the transaction type in question (for example an MTL_Material_Transactions row) you can use a wide range of column values in the source transaction to drive the selection of either the entire Code Combination as a single string, or individual Segment values.

Contrast this with the Release 11i mapping setup, where you could specify which Account Number was to be used based on a relatively short list of available Selection Criteria.
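Conceptually, an Account Derivation Rule selects an account (or a segment value) based on attributes of the source transaction. The sketch below is a hypothetical Python illustration of that idea only; the transaction attributes and account values are invented, not seeded Release 12 data.

```python
# Hypothetical sketch of condition-based account derivation: pick a natural-
# account segment value from attributes of the source transaction record.
# Transaction-type codes echo the 11i event names mentioned above; the
# account values are purely illustrative.

def derive_account_segment(txn: dict) -> str:
    """Return an account segment based on transaction attributes."""
    if txn.get("transaction_type") == "PORC":   # purchase receipt
        return "1410"                           # e.g. raw-material inventory
    if txn.get("transaction_type") == "OMSO":   # sales order issue
        return "5010"                           # e.g. cost of goods sold
    return "9999"                               # default / suspense account

txn = {"transaction_type": "PORC", "organization_id": 207}
print(derive_account_segment(txn))  # -> 1410
```

In the real Release 12 setup the conditions and outputs are defined declaratively in the ADR forms rather than in code, but the selection logic follows this shape.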

There is a basic accounting setup seeded in a Release 12 database but you will need to update this.  Seeded records always show an Owner of ‘Oracle’, and to change the setup you must always make a copy of the relevant record which will be owned by ‘User’, and this can then be updated.

Custom Sources in Derivation Rules

As noted above, the list of ‘Sources’ upon which you can base Account (or Segment) Derivation logic is limited to columns which appear in the relevant transaction record.

You can extend this list of Sources to look at data in other tables by providing your own PL/SQL code.  For example, say the current transaction record contains an Inventory Item Id but not the Item Class, but you wish to post to different Accounts based on the Item Class.

This can be achieved – please see Note 1356816.1  (“How to Create a Custom Source for use in OPM Financials Create Accounting”) for details.

Mapping Sets

Another way in which you can extend the Account Derivation logic is by using Mapping Sets.

A Mapping Set is basically a translation table – one value is passed in and a different value is returned.  For example you might have 10 ‘branch’ Organizations but the same Account is to be used for six of these, and another Account for the other four.

You can code for this using the regular ADR logic, but the neater solution is to group these Organizations in a Mapping Set which is then associated with the ADR.
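A Mapping Set can be pictured as a simple lookup table. The following is a hypothetical Python sketch of the ten-organization example above; the organization codes and account values are invented for illustration.

```python
# Hypothetical Mapping Set: one value is passed in (an organization code)
# and a different value is returned (an account). Six organizations share
# one account and the other four share another.
mapping_set = {
    **{org: "6100" for org in ("ORG01", "ORG02", "ORG03",
                               "ORG04", "ORG05", "ORG06")},
    **{org: "6200" for org in ("ORG07", "ORG08", "ORG09", "ORG10")},
}

def translate(org_code: str) -> str:
    """Translate an input value into its mapped output value."""
    return mapping_set[org_code]

print(translate("ORG03"))  # -> 6100
print(translate("ORG09"))  # -> 6200
```

Grouping the organizations this way keeps the ADR itself simple: the rule just references the Mapping Set instead of repeating a condition per organization.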

See Note 1115473.1 for a worked example.

Period End Processing

After Item Costs have been finalized and the OPM Cost Update process run, in Release 12 the basic process is:


– run the OPM Accounting Pre-Processor (GMFXUPD)
– run ‘Create Accounting for OPM Financials’ in Draft mode
– run ‘Create Accounting for OPM Financials’ in Final mode

The Accounting Pre-Processor is fundamentally the same process as the Release 11i Subledger Update program.  Until Create Accounting is run in ‘Final’ mode, the Pre-Processor can be run repeatedly for the current Period until it is successful (that is to say, there are no unexpected errors reported, and the results from the Detailed Subledger Report are as expected).

Create Accounting sweeps through all the rows written by the Accounting Pre-Processor and builds postings which can optionally be posted automatically to GL.


In Release 11i the processing of Order Management transactions caused postings to the COGS (Cost of Goods Sold) Account by default.

In Release 12 the default posting will be to Deferred COGS (DCOGS) unless the three COGS Recognition concurrent processes (owned by the INV Product) have been successfully run before the OPM Accounting Pre-Processor.

The three processes must be run in this sequence:

– Record Order Management Transaction
– Collect Revenue Recognition Information
– Generate COGS Recognition Events.

Saving Planning Webform / Smartview Results in Error : “There was an error during the save process”


When attempting to submit data in a Hyperion Planning web form, the following error occurs:

There was an error during the save process

This is due to the Essbase block limit being exceeded, as identified in the Essbase log by the following error message:

Request [SpreadsheetOperation] from user [admin@Native Directory] was terminated since it exceeded the block limit

To fix the issue, remove the QRYGOVEXECBLK setting from the Essbase.cfg file and restart the Essbase service.
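For reference, the setting to remove will appear in Essbase.cfg as a line similar to the following (the numeric limit shown is illustrative, not a recommended value):

```
QRYGOVEXECBLK 10000
```

QRYGOVEXECBLK is a query governor setting that terminates a request once it touches more than the specified number of blocks, which is why the spreadsheet operation above was killed mid-save.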



DRM – How to Lock DRM Users Out of the Application for Short Periods of Time

What are the ways to lock DRM users out for short periods of time?

There is no single way to lock users out of DRM. Depending on how DRM is configured, workarounds include the following:

1. If users are CSS (Shared Services) users, one could change the authentication mode of DRM to internal only. Bounce DRM; then use only an internal user to run the updates.

2. One could go through and change each user in DRM to lock them out, but this is very manual and, if there are a lot of users, time-consuming.

3. If users are part of NAGs (Node Access Groups), then all of the roles could be revoked and added back after the blackout period is over.