How to Configure an Existing EPM 11.1.2.4 With Java 7


To configure an existing EPM 11.1.2.4 Environment to use Java 7

This document is written for both Unix and Windows and assumes that you already have a working EPM 11.1.2.4 environment. Reconfiguring EPM to use Java 7 is a manual process. Java 7 will replace both JDK 1.6 and JRockit 1.6.

Note that 11.1.2.4 is the only EPM version currently certified for Java 7.

The paths shown are for Windows. Adjust them accordingly if you are on Unix.

1. Download the latest Java 7 patch provided in the quarterly Critical Patch Update Advisory for your OS and install it to F:\Oracle\Middleware

In this instance, Patch 26916855 from the Oracle Critical Patch Update Advisory – October 2017 was used

2. Change Directory to F:\Oracle\Middleware\EPMSystem11R1\common\config\11.1.2.0 and back up the setJavaRuntime.(bat/sh) script. Then change all occurrences of jdk160_35 to jdk1.7.0_161 (a scripted sketch for steps 2 and 3 follows step 3)

3. Change Directory to F:\Oracle\Middleware\user_projects\domains\EPMSystem\bin and back up the WebLogic setDomainEnv.(cmd/sh) script. Then change all occurrences of jrockit_160_37 to jdk1.7.0_161
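
On Unix, steps 2 and 3 can be scripted. A minimal sketch with GNU sed, assuming the default script locations and the jdk1.7.0_161 directory from step 1 (adjust paths and version strings to match your environment):

# Step 2: point EPM's Java runtime at the new JDK
cd /Oracle/Middleware/EPMSystem11R1/common/config/11.1.2.0
cp setJavaRuntime.sh setJavaRuntime.sh.bak
sed -i 's/jdk160_35/jdk1.7.0_161/g' setJavaRuntime.sh

# Step 3: point the WebLogic domain at the new JDK
cd /Oracle/Middleware/user_projects/domains/EPMSystem/bin
cp setDomainEnv.sh setDomainEnv.sh.bak
sed -i 's/jrockit_160_37/jdk1.7.0_161/g' setDomainEnv.sh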

4. Windows Only – Back up the Hyperion Solutions branch of the Windows Registry

5. Windows Only – Update the Windows Registry via regedit.

For ALL EPM services, change the JavaDLL value to F:\Oracle\Middleware\jdk1.7.0_161\jre\bin\server\jvm.dll
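
The same change can be scripted with reg.exe. The service key below (EPMServer0\HyS9EPMServer_epmsystem1) is only an example name; repeat the command for each EPM service key under the Hyperion Solutions branch you backed up in step 4:

reg add "HKLM\SOFTWARE\Hyperion Solutions\EPMServer0\HyS9EPMServer_epmsystem1" /v JavaDLL /t REG_SZ /d "F:\Oracle\Middleware\jdk1.7.0_161\jre\bin\server\jvm.dll" /f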

6. Change directory to /Oracle/Middleware/user_projects/epmsystem1/bin/deploymentScripts and back up all scripts. Then modify each setCustomParams<Product>.(bat/sh) script and remove the "-XX:-FlightRecorder" option from each one. This option is not valid in Java 7.
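
On Unix, a one-line sketch that backs up and edits every setCustomParams script in place (GNU sed assumed; the option is assumed to appear literally as -XX:-FlightRecorder):

cd /Oracle/Middleware/user_projects/epmsystem1/bin/deploymentScripts
for f in setCustomParams*.sh; do cp "$f" "$f.bak"; sed -i 's/-XX:-FlightRecorder//g' "$f"; done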

7. For Essbase – Change directory to F:\Oracle\Middleware\user_projects\epmsystem1\config\OPMN\opmn and back up the opmn.xml file. Open opmn.xml and change the one occurrence of jdk160_35 to jdk1.7.0_161

8. For Essbase – (This step may not be required in your environment). By default Essbase gets the location of Java from OPMN, but if you have modified your essbase.cfg and hardcoded the Java location, then you must make a change to the essbase.cfg.

Change Directory to F:\Oracle\Middleware\user_projects\epmsystem1\EssbaseServer\essbaseserver1\bin and open essbase.cfg. If the JvmModuleLocation entry is not preceded by a semicolon (a leading semicolon marks the line as a comment), you must modify the Java location, changing jdk160_35 to jdk1.7.0_161
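
A quick way to check, using the Unix equivalent of the path above (a commented line needs no change):

cd /Oracle/Middleware/user_projects/epmsystem1/EssbaseServer/essbaseserver1/bin
grep -n "JvmModuleLocation" essbase.cfg
# An active entry would look something like:
# JvmModuleLocation /Oracle/Middleware/jdk1.7.0_161/jre/lib/amd64/server/libjvm.so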

9. For Profitability – the option "-XXcompressedRefs:size=32GB" (a JRockit-only option) should be removed from the corresponding setCustomParams file.

10. Client Changes

Essbase Administration Services Console
Change Directory to F:\Oracle\Middleware\EPMSystem11R1\products\Essbase\eas\console\bin
Modify admincon.bat, changing jdk160_35 to jdk1.7.0_161

Financial Reporting Client
Change Directory to F:\Oracle\Middleware\EPMSystem11R1\products\financialreporting\bin
Modify the setJavaRuntime.cmd script, changing jdk160_35 to jdk1.7.0_161

11. Start all services and test
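
One way to confirm the services picked up the new JDK is to search the WebLogic managed server logs for the new JDK path after startup (log locations vary by environment; Unix paths shown):

grep -ril "jdk1.7.0_161" /Oracle/Middleware/user_projects/domains/EPMSystem/servers/*/logs/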

Thanks,
~KKT~


DRM External Connection Using TNS Alias Name for Oracle DataBase failover


How to set up a TNS alias name with DRM to use failover with an Oracle Database?

* Navigate to the Instant Client ODP.NET DB driver location on the DRM application server:
E:\Oracle\Middleware\EPMSystem11R1\products\DataRelationshipManagement\server\odp\network\admin

* Create a TNSNAMES.ORA file with the TNS alias information (see the sketch after this list)

* During the external connection creation for the application in the DRM Console, select the Database type 'Oracle' and in the connection string specify the value as:
Data Source=<TNS alias name>

* Test and verify the DB connection.
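
A minimal TNSNAMES.ORA sketch with two listener addresses and failover enabled; the DRMFAIL alias, host names, port, and service name are placeholders for your own values:

DRMFAIL =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (FAILOVER = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost1.example.com)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost2.example.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = drmdb.example.com)
    )
  )

With an entry like this in place, the connection string in the DRM Console would read Data Source=DRMFAIL.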

Thanks,
~KKT~

How to set up FDMEE on premise connection to Cloud through proxy


Steps to add JVM Options so that an FDMEE server can connect to Cloud environments via proxy.

First, make sure the server you are going to configure is able to ping the Cloud URL.
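
If curl is available on the server, connectivity through the proxy can also be checked up front; the proxy values and Cloud URL below are placeholders:

curl -x <Proxy Server>:<Proxy Port> -I https://<your-cloud-instance>.oraclecloud.com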

1. Edit the setCustomParamErpIntegrator.bat/sh under

…\Middleware\user_projects\epmsystem1\bin\deploymentscripts

and add the following options to the JAVA_OPTIONS section:

-Dhttps.proxyHost=<Proxy Server> -Dhttps.proxyPort=<Proxy Port>

Example:

set JAVA_OPTIONS=-Dhyperion.home=C:\Oracle\Middleware\EPMSystem11R1 -DHYPERION_HOME=C:\Oracle\Middleware\EPMSystem11R1 -DEPM_ORACLE_HOME=C:\Oracle\Middleware\EPMSystem11R1 -DEPM_ORACLE_INSTANCE=C:\Oracle\Middleware\user_projects\epmsystem1 -Dlogging.folder=C:\Oracle\Middleware\user_projects\domains\EPMSystem/servers/ErpIntegrator0/logs/ -Dserver.name=ErpIntegrator0 -Dsun.net.inetaddr.ttl=0 -Djava.io.tmpdir=C:\Oracle\Middleware\user_projects\epmsystem1/tmp -Doracle.jdbc.useFetchSizeWithLongColumn=true -Xrs -XX:-FlightRecorder -DcomponentId=e0fc20b3237d72e8S2ea8412e157c18cf1c3S7ed8 -DESS_ES_HOME=C:\Oracle\Middleware\EPMSystem11R1/common/EssbaseJavaAPI/11.1.2.0 -DDOMAIN_HOME=C:\Oracle\Middleware\user_projects\domains\EPMSystem -Dcom.hyperion.planning.datamodel=64 -Djava.library.path=C:\Oracle\Middleware\EPMSystem11R1/products/FinancialDataQuality/lib/64-bit;C:\Oracle\Middleware\EPMSystem11R1/common/EssbaseRTC-64/11.1.2.0/bin; -DPATH=C:\Oracle\Middleware\EPMSystem11R1/products/FinancialDataQuality/lib/64-bit;C:\Oracle\Middleware\EPMSystem11R1/common/EssbaseRTC-64/11.1.2.0/bin; -Dhttps.proxyHost=<Proxy Server> -Dhttps.proxyPort=<Proxy Port> %JAVA_OPTIONS%

** Note that your proxy server and proxy port should be substituted for <Proxy Server> and <Proxy Port> **

2. If this is a Windows server, you should also add the two JVM options to the Windows registry. First, update JVMOptionCount, increasing its decimal value by 2. Then add the new string values:

Example:

JVMOption51 -Dhttps.proxyHost=<Proxy Server>

JVMOption52 -Dhttps.proxyPort=<Proxy Port>


After adding the entries, restart the FDMEE server. Review ErpIntegrator.log to make sure the JVM properties are getting set; the system prints all of the options that were set at startup.
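
The registry change can also be scripted. The key path below follows the pattern used elsewhere on this blog, and a current JVMOptionCount of 50 is assumed; check your actual key name and count first:

reg query "HKLM\SOFTWARE\Hyperion Solutions\ERPintegrator0\HyS9AIFWeb_epmsystem1" /v JVMOptionCount
reg add "HKLM\SOFTWARE\Hyperion Solutions\ERPintegrator0\HyS9AIFWeb_epmsystem1" /v JVMOption51 /t REG_SZ /d "-Dhttps.proxyHost=<Proxy Server>" /f
reg add "HKLM\SOFTWARE\Hyperion Solutions\ERPintegrator0\HyS9AIFWeb_epmsystem1" /v JVMOption52 /t REG_SZ /d "-Dhttps.proxyPort=<Proxy Port>" /f
reg add "HKLM\SOFTWARE\Hyperion Solutions\ERPintegrator0\HyS9AIFWeb_epmsystem1" /v JVMOptionCount /t REG_DWORD /d 52 /f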

Thanks,
~KKT~

FDMEE connection to Teradata setup


Follow the steps below:

++ Download the JDBC driver

In order to access the Teradata database, you first need a compatible Teradata JDBC driver.

Download the JDBC drivers (TeraJDBC__indep_indep.16.20.00.02.tar/zip) from the Teradata download website. Without these drivers, the connector will not function correctly.

https://downloads.teradata.com/download/connectivity/jdbc-driver

++ Deploy the JDBC driver files to the appropriate location. Steps are given below.

i) Extract the downloaded archive; it contains:

terajdbc4.jar
tdgssconfig.jar

ii) Copy the above files to the FDMEE server; a suitable location is /oracle/Middleware/user_projects/domains/EPMSystem/lib

iii) Set the LD_LIBRARY_PATH environment variable to the directory where the jar files were placed (/oracle/Middleware/user_projects/domains/EPMSystem/lib)
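
A sketch of steps ii) and iii) on Unix, using the archive name and domain lib path from above:

tar -xf TeraJDBC__indep_indep.16.20.00.02.tar
cp terajdbc4.jar tdgssconfig.jar /oracle/Middleware/user_projects/domains/EPMSystem/lib
export LD_LIBRARY_PATH=/oracle/Middleware/user_projects/domains/EPMSystem/lib:$LD_LIBRARY_PATH

Jar files placed under the WebLogic domain's lib directory are appended to the server classpath at startup, which is why the restart in the next step picks up the driver.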

++ Stop/Start FDMEE Services

Thanks,
~KKT~

How to create a SmartView data source connection to Oracle Analytics Cloud – Essbase


After you install Oracle Smart View for Office, you can create connections to Oracle Analytics Cloud – Essbase.

Connections require information about the server and port. Your cloud service administrator should provide you with the information you need to create the connection. Use the quick connection method to create a private connection to the cloud service.

To create a private connection using the quick connection method:

1. In Excel, select the Smart View ribbon, and then click Panel.

2. On the Smart View Panel, click the arrow next to the Switch to button, and then select Private Connections from the list.

3. Still in the Smart View Panel, in the text box, enter the URL for the data source to which you want to connect.

4. Enter the URL using this syntax:

https://server/essbase/smartview

5. Click Go, or press Enter.

6. On the login window, enter your login credentials, and then select a data source from the drop-down menu.

Thanks,

~KKT~

Essbase: Error(1270004) Cannot complete operation: insufficient aggregate storage cache size


When adding a member to an ASO outline, restructuring the database fails. The application log shows the following error:

[Wed Oct 25 08:27:38 2017]Local/App/App/user@Directory/8732/Error(1270004)
Cannot complete operation: insufficient aggregate storage cache size

[Wed Oct 25 08:27:39 2017]Local/App/App/user@Directory/8732/Error(1007045)
Restructuring of Database [App] Failed

This happens because the size of the ASO application cache is not sufficient. The required size of the application cache depends on the volume of data, metadata, formulas, data distribution, etc.

To fix this, increase the pending cache size limit. It is recommended to set the cache to at least 250 to 500 MB.
Note: This value is derived from most application requirements, but if you have a larger application this value can be increased beyond 500 MB.

From EAS Console:

1. Right click on the ASO Application -> Edit Properties.
2. Set the Pending cache size limit (MB) to at least 250 to 500 MB.
3. Click on Apply and Close.
4. Restart the ASO Application.

Using MaxL (here we set the cache size to 250 MB, within the recommended range of 250 to 500 MB):

alter system load application <APPname>;
query application <APPname> get cache_size;
alter application <APPname> set cache_size 250MB;
alter system unload application <APPname>;
alter system load application <APPname>;
query application <APPname> get cache_size;

Essbase: How Do I Generate Delimited Output Tables for MaxL and MDX



For the following types of MaxL and MDX statements, whose output displays tables or grids of data, you can set the MaxL Shell (essmsh) to create spool (output) files that are more easily readable by programs (such as Microsoft Excel, RDBMS, or custom parsers) that can import delimiter-separated file formats such as CSV.

· Select statements (MDX)

· Display statements (MaxL)

· Query statements (MaxL)

 

To turn on delimiters for tabular output, use a MaxL Shell command to turn on column delimiters in the MaxL script file. The syntax is:

set column_separator STRING;

where STRING is the column separator.

For example, to create tab-delimited output that Excel can read:

set column_separator "	";

where the double quotation marks enclose a single Tab.

The column separator can be any string. For example:

set column_separator "*|*";

To turn delimiters off after the select, display, or query statement has executed, use:

set column_separator off;

To turn column headers on or off, the syntax is:

set column_header on|off;

Column headers are on by default.

To control whether issued statements are repeated in the spool file, turn echo mode on or off. The syntax is:

set echo_mode on|off;

Echo mode is on by default.

 

The tabular output is saved in MaxL spool files. A MaxL Shell spool file is a text file of the shell session and its output.

To make spool files contain only the delimited tabular output that you want,

a. Use non-interactive mode to run a MaxL script file. Non-interactive mode means that the MaxL Shell is invoked with a script file name provided as the first (or only) argument.

b. Ensure that the spool on command is the last MaxL Shell operation in the script before the select, display, or query statement, and that the spool off command follows the final statement that you are interested in printing to the spool file.

c. Set the message level to warning; otherwise, informational messages will be printed in the spool file.
set message level warning;

d. Turn off echo mode; otherwise, statements and newlines will be printed in the spool file.
set echo_mode off;

e. Optionally, turn off the printing of the header row in the spool file.
set column_header off;

The spool file contains a concatenation of all the select, display, or query statement results. Each result is separated by a single empty line.

 

Sample Script  

Note: This sample code is provided for educational purposes only and is not supported by Oracle. It has been tested internally, however, and works as documented.
/* file name: report0.msh */

login 'admin' identified by 'password' on 'localhost';

set column_width 200;

/* to make spool file cleaner, return only warnings */
set message level warning;
set column_separator "#~";
set column_header off;
set echo_mode off;

spool on to 'output0mdxnohead.txt';
select {[Mar],[Apr]} on columns,
  {[300],[400]} dimension properties
  level_number, member_unique_name on rows,
  crossjoin({[Actual],[Budget]},
  {[Opening Inventory],[Ending Inventory]})
  dimension properties level_number, member_unique_name on pages
from [Sample].[Basic];
spool off;

set column_header on;

spool on to 'output0mdxhead.txt';
select {[Mar],[Apr]} on columns,
  {[300],[400]} dimension properties
  level_number, member_unique_name on rows,
  crossjoin({[Actual],[Budget]},
  {[Opening Inventory],[Ending Inventory]})
  dimension properties level_number, member_unique_name on pages
from [Sample].[Basic];
spool off;

/* More width for display database, as member names are ~4kb */
set column_width 10000;
set column_header on;

spool on to 'output0display.txt';
display session all;
display database Sample.Basic;
spool off;

spool on to 'output0querydbstats.txt';
query database Sample.Basic get dbstats dimension;
spool off;

logout;
exit;

 

Example Output Files Created:

When run in non-interactive mode (essmsh report0.msh), the resulting spool files have the following contents:

MDX output without header (output0mdxnohead.txt):

Actual#~Opening Inventory#~0#~Actual#~0#~Opening Inventory#~Cream Soda#~1#~300#~29095#~30334

Actual#~Opening Inventory#~0#~Actual#~0#~Opening Inventory#~Fruit Soda#~1#~400#~26409#~27588

Actual#~Ending Inventory#~0#~Actual#~0#~Ending Inventory#~Cream Soda#~1#~300#~30334#~32266

Actual#~Ending Inventory#~0#~Actual#~0#~Ending Inventory#~Fruit Soda#~1#~400#~27588#~29550

Budget#~Opening Inventory#~0#~Budget#~0#~Opening Inventory#~Cream Soda#~1#~300#~27380#~28460

Budget#~Opening Inventory#~0#~Budget#~0#~Opening Inventory#~Fruit Soda#~1#~400#~27230#~29030

Budget#~Ending Inventory#~0#~Budget#~0#~Ending Inventory#~Cream Soda#~1#~300#~28460#~30190

Budget#~Ending Inventory#~0#~Budget#~0#~Ending Inventory#~Fruit Soda#~1#~400#~29030#~31520

MDX output with header (output0mdxhead.txt):

Scenario#~Measures#~Scenario.LEVEL_NUMBER#~Scenario.MEMBER_UNIQUE_NAME#~Measures.LEVEL_NUMBER#~Measures.MEMBER_UNIQUE_NAME#~Product#~Product.LEVEL_NUMBER#~Product.MEMBER_UNIQUE_NAME#~Mar#~Apr

Actual#~Opening Inventory#~0#~Actual#~0#~Opening Inventory#~Cream Soda#~1#~300#~29095#~30334

Actual#~Opening Inventory#~0#~Actual#~0#~Opening Inventory#~Fruit Soda#~1#~400#~26409#~27588

Actual#~Ending Inventory#~0#~Actual#~0#~Ending Inventory#~Cream Soda#~1#~300#~30334#~32266

Actual#~Ending Inventory#~0#~Actual#~0#~Ending Inventory#~Fruit Soda#~1#~400#~27588#~29550

Budget#~Opening Inventory#~0#~Budget#~0#~Opening Inventory#~Cream Soda#~1#~300#~27380#~28460

Budget#~Opening Inventory#~0#~Budget#~0#~Opening Inventory#~Fruit Soda#~1#~400#~27230#~29030

Budget#~Ending Inventory#~0#~Budget#~0#~Ending Inventory#~Cream Soda#~1#~300#~28460#~30190

Budget#~Ending Inventory#~0#~Budget#~0#~Ending Inventory#~Fruit Soda#~1#~400#~29030#~31520

Multiple display statements concatenated (output0display.txt):

user#~session#~login_time#~application#~database#~db_connect_time#~request#~request_time#~connection_source#~connection_ip#~request_state

admin#~0#~2#~Sample#~Basic#~0#~none#~0#~localhost.localdomain*#~127.0.0.1#~

application#~database#~comment#~startup#~autostartup#~minimum permission#~aggregate_missing#~two_pass_calc#~create_blocks#~data_cache_size#~file_cache_size#~index_cache_size#~index_page_size#~cache_pinning#~compression#~retrieve_buffer_size#~retrieve_sort_buffer_size#~io_access_mode#~pending_io_access_mode#~no_wait#~committed_mode#~pre_image_access#~lock_timeout#~commit_blocks#~commit_rows#~currency_database#~currency_member#~currency_conversion#~note#~db_type#~read_only_mode#~db_status#~elapsed_time#~users_connected#~blocks_locked#~number_dimensions#~number_disk_volume#~data_status#~current_data_cache#~current_file_cache#~current_index_cache#~current_index_page#~currency_country_dim#~currency_time_dim#~currency_category_dim#~currency_type_dim#~currency_partition_dim

Sample#~Basic#~#~TRUE#~TRUE#~no_access#~FALSE#~TRUE#~FALSE#~3145728#~33554432#~1048576#~8192#~FALSE#~2#~20480#~20480#~1#~1#~FALSE#~2#~TRUE#~20#~3000#~0#~#~#~1#~#~0#~FALSE#~2#~00:00:00:01#~1#~0#~10#~0#~0#~0#~0#~1048576#~8192#~#~Year#~Measures#~#~

Single query statement (output0querydbstats.txt):

dimension_name#~type#~declared_size#~actual_size

Year#~0#~19#~12

Measures#~0#~17#~8

Product#~1#~22#~19

Market#~1#~25#~25

Scenario#~0#~5#~2

Caffeinated#~1#~3#~0

Ounces#~1#~5#~0

Pkg Type#~1#~3#~0

Population#~1#~15#~0

Intro Date#~1#~8#~0

DataExport Works Differently Between Essbase Versions 11.1.2.2 and 11.1.2.4


Dears,

I noticed this issue and thought of sharing it with everyone.

After migrating to Essbase v11.1.2.4.x from v11.1.2.2.x, some of the DataExport options set in a calc script do not work. For example, exporting data using this set of DataExport options:

DataExportDynamicCalc ON;
DATAEXPORTNONEXISTINGBLOCKS OFF;

On v11.1.2.2, this gave the result:

OK/INFO – 1012698 – This DataExport operation will export data from existing blocks only. Any FIX on sparse dynamic calc members will be ignored. Use DATAEXPORTNONEXISTINGBLOCKS ON option to export data from all potential blocks..
OK/INFO – 1012690 – Data Export Completed. Total blocks: [2705]. Elapsed time: [972.51]..
OK/INFO – 1012696 – Total Number of Non-Missing cells exported: [72109]..
OK/INFO – 1012550 – Total Calc Elapsed Time : [972.51] seconds.
OK/INFO – 1013274 – Calculation executed.

After migration to v11.1.2.4, no data is exported:

OK/INFO – 1012698 – This DataExport operation will export data from existing blocks only. Any FIX on sparse dynamic calc members will be ignored. Use DATAEXPORTNONEXISTINGBLOCKS ON option to export data from all potential blocks..
OK/INFO – 1012690 – Data Export Completed. Total blocks: [0]. Elapsed time: [0.58]..
OK/INFO – 1012696 – Total Number of Non-Missing cells exported: [0]..
OK/INFO – 1012550 – Total Calc Elapsed Time : [0.582] seconds.
OK/INFO – 1013274 – Calculation executed.

One of the major issues when using the dynamic calc export feature of DataExport is the number of blocks DataExport has to visit to export all data. This number can be very high for large outlines with many members in the sparse dimensions: DataExport has to visit every combination of sparse members and materialize every potential block to export the data, including a large number of sparse dynamic blocks. Running dynamic calculation on that many blocks significantly slows DataExport down, to the point where it can appear to hang.

It was therefore decided to change the default behavior of DataExport's dynamic calc export. Instead of materializing and visiting all blocks, DataExport now exports existing blocks only, which means dynamic calculation is run for existing blocks only; all sparse dynamic blocks are skipped, and any FIX on sparse dynamic calc members is ignored. This behavior is controlled by a new option added to the existing set of DataExport options, DataExportNonExistingBlocks ON|OFF. The default value is OFF: DataExport exports existing blocks only. If the option is turned ON, DataExport exports all potential blocks (every combination of sparse members).

The following two INFO messages were added to DataExport:

1. DataExportNonExistingBlocks is OFF (default behavior): This DataExport operation will export data from existing blocks only. Any FIX on sparse dynamic calc members will be ignored. Use DATAEXPORTNONEXISTINGBLOCKS ON option to export data from all potential blocks.

2. DataExportNonExistingBlocks is ON: DataExport Warning: This DataExport operation will export a total of [] blocks. Exporting dynamic calc members from all blocks has significant performance overhead. Use DATAEXPORTNONEXISTINGBLOCKS OFF option to export data from existing blocks only.

 

As the behavior has changed, to export dynamic blocks the following option needs to be enabled in the calc script (a fuller sketch follows below):

DataExportNonExistingBlocks ON;
This should fix the issue.
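
A minimal calc script sketch putting this together; the FIX members and output file name are placeholders, and note the performance warning above before leaving DataExportNonExistingBlocks ON:

SET DATAEXPORTOPTIONS
{
DataExportLevel "ALL";
DataExportDynamicCalc ON;
DataExportNonExistingBlocks ON;
};
FIX ("Actual", "Budget")
DATAEXPORT "File" "," "/tmp/exportall.txt" "#MI";
ENDFIX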
Thanks,
~KKT~

How To Purge FDMEE Files


Under the FDMEE instance there are data and outbox directories that keep growing (80 GB). How do we clean up those directories? What is the standard process? There are also some other files, such as .ind and .dat, directly under the FDMEE instance. How do we clean those up?

In order to purge the FDMEE files, there is an option under Workflow > Script Execution > System Maintenance Tasks, where you will find the scripts to purge files.

These scripts run per application; you need to select the start and end periods along with the category.

This will help you to purge files.
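
Before running the purge, it is worth checking what is actually consuming the space so you can pick sensible Days to Keep values. This sketch assumes the application folders sit under the instance directory; your application root folder may be configured elsewhere:

cd /Oracle/Middleware/user_projects/epmsystem1
du -sh inbox outbox data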

Maintain Application Folder
======================

The Maintain Application Folder process purges files from the inbox, outbox, and data folder directories. Oracle Hyperion Financial Data Quality Management, Enterprise Edition accepts a separate Days to Keep parameter for each of the folders. If the value is not specified for a specific folder, FDMEE skips that folder.

In addition, FDMEE checks the inbox and outbox sub-directories under the respective folders and deletes any files. In the data, FDMEE skips the scripts directory because it holds customer scripts.

Maintain Process Table
======================

This process maintains the following execution tables:

AIF_PROCESSES
AIF_PROCESS_DETAILS
AIF_PROCESS_LOGS
AIF_PROCESS_PARAMETERS
AIF_PROCESS_PERIODS
AIF_PROCESS_STEPS
AIF_BAL_RULE_LOADS
AIF_BAL_RULE_LOAD_PARAMS
AIF_BATCH_JOBS
AIF_BATCH_LOAD_AUDIT
AIF_TEMP

It accepts the number of days to keep as a parameter.

Thanks,

~KKT~

FDMEE Runs Out of Resources


Dears,

Since last month we have noticed that FDMEE has been going down multiple times a day. We are receiving the following error message in the logs:

Error Message: weblogic.jdbc.extensions.PoolLimitSQLException: weblogic.common.resourcepool.ResourceLimitException: No resources currently available in pool aif_datasource to allocate to applications, please increase the size of the pool and retry..

We increased the connection pool for the JDBC data source (aif_datasource) from the default of 120 to 200, and then to 300.

We also checked the JVM settings: the maximum JVM heap size is set to a large value, but the minimum JVM heap size does not match the maximum. Follow the steps below to update it.

A) Update the min and max JVM heap sizes to match each other (for example, change -Xms128m to -Xms5120m to raise the minimum to 5 GB)

Start > Run > RegEdit

HKEY_LOCAL_MACHINE > Software > Hyperion Solutions > ERPintegrator0 > HyS9AIFWeb_epmsystem1

Locate the JVMOption value for -Xmsxxxm

Locate the JVMOption value for -Xmx5120m

Make sure the two match
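
A quick way to compare the two values without scrolling through regedit, using the key above:

reg query "HKLM\SOFTWARE\Hyperion Solutions\ERPintegrator0\HyS9AIFWeb_epmsystem1" | findstr /i "Xms Xmx"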

B) Verify that the AIF_DataSource JDBC connection pool maximum is set to a large enough value (this depends on the number of users)

C) Verify that the ODIMASTER JDBC connection pool maximum is set to the same value as the AIF_Datasource JDBC connection pool

D) Recycle the FDMEE Web Application

Thanks,

~KKT~