Essbase: Error(1270004) Cannot complete operation: insufficient aggregate storage cache size


When adding a member to an ASO outline, the database restructure fails, and the application log shows the following error:

[Wed Oct 25 08:27:38 2017]Local/App/App/user@Directory/8732/Error(1270004)
Cannot complete operation: insufficient aggregate storage cache size

[Wed Oct 25 08:27:39 2017]Local/App/App/user@Directory/8732/Error(1007045)
Restructuring of Database [App] Failed

This occurs when the aggregate storage (ASO) application cache is too small. The required cache size depends on the volume of data, metadata, formulas, data distribution, and so on.

To fix this, increase the Pending cache size limit. It is recommended to set the cache to at least 250 to 500 MB.
Note: This value covers most application requirements, but for a larger application it can be increased beyond 500 MB.

From EAS Console:

1. Right click on the ASO Application -> Edit Properties.
2. Set the Pending cache size limit (MB) to at least 250 to 500 MB.
3. Click on Apply and Close.
4. Restart the ASO Application.

Using MaxL (here we set the cache size to 250 MB, per the 250 to 500 MB recommendation above):

/* load the application and check the current cache size */
alter system load application <APPname>;
query application <APPname> get cache_size;

/* set the new cache size; it takes effect after the application restarts */
alter application <APPname> set cache_size 250MB;

/* restart the application and verify the new cache size */
alter system unload application <APPname>;
alter system load application <APPname>;
query application <APPname> get cache_size;


Essbase – How Do I Generate Delimited Output Tables for MaxL and MDX


Create tables or grids of data that can be more easily read by programs (such as Microsoft Excel, an RDBMS, or custom parsers) that import delimiter-separated file formats such as CSV.

 

For the following types of MaxL and MDX statements, whose output displays tables or grids of data, you can set the MaxL Shell (essmsh) to create spool (output) files in such a delimiter-separated format:

• Select statements (MDX)

• Display statements (MaxL)

• Query statements (MaxL)

 

To turn on column delimiters for tabular output, use the following MaxL Shell command in the MaxL script file. The syntax is:

set column_separator STRING;

where STRING is the column separator.

For example, to create tab-delimited output that Excel can read:

set column_separator " ";

where the double quotation marks enclose a single Tab character.

The column separator can be any string. For example:

set column_separator "*|*";

To turn delimiters off after the select, display, or query statement has executed, use:

set column_separator off;

To turn column headers on or off, the syntax is:

set column_header on|off;

Column headers are on by default.

To control whether issued statements are repeated in the spool file, turn echo mode on or off. The syntax is:

set echo_mode on|off;

Echo mode is on by default.

 

The tabular output is saved in MaxL spool files. A MaxL Shell spool file is a text file of the shell session and its output.

To make spool files contain only the delimited tabular output that you want,

a. Use non-interactive mode to run a MaxL script file. Non-interactive mode means that the MaxL Shell is invoked with a script file name as the first (or only) argument.

b. Ensure that the spool on command is the last MaxL Shell operation before the select, display, or query statement, and that the spool off command follows the final statement you want printed to the spool file.

c. Set the message level to warning; otherwise, informational messages will be printed in the spool file.
set message level warning;

d. Turn off echo mode; otherwise, statements and newlines will be printed in the spool file.
set echo_mode off;

e. Optionally, turn off the printing of the header row in the spool file.
set column_header off;

The spool file contains a concatenation of all the select, display, or query statement results. Each result is separated by a single empty line.

 

Sample Script  

Note: This sample code is provided for educational purposes only and is not supported by Oracle. It has been tested internally, however, and works as documented.
/* file name: report0.msh */

login 'admin' identified by 'password' on 'localhost';

set column_width 200;

/* to make the spool file cleaner, return only warnings */
set message level warning;

set column_separator "#~";
set column_header off;
set echo_mode off;

spool on to 'output0mdxnohead.txt';

select {[Mar],[Apr]} on columns,
{[300],[400]} dimension properties
level_number, member_unique_name on rows,
crossjoin({[Actual],[Budget]},
{[Opening Inventory],[Ending Inventory]})
dimension properties level_number, member_unique_name on pages
from [Sample].[Basic];

spool off;

set column_header on;

spool on to 'output0mdxhead.txt';

select {[Mar],[Apr]} on columns,
{[300],[400]} dimension properties
level_number, member_unique_name on rows,
crossjoin({[Actual],[Budget]},
{[Opening Inventory],[Ending Inventory]})
dimension properties level_number, member_unique_name on pages
from [Sample].[Basic];

spool off;

/* more width for display database, as member names can be ~4 KB */
set column_width 10000;
set column_header on;

spool on to 'output0display.txt';

display session all;
display database Sample.Basic;

spool off;

spool on to 'output0querydbstats.txt';

query database Sample.Basic get dbstats dimension;

spool off;

logout;
exit;

 

Example Output Files Created:

When run in non-interactive mode (essmsh report0.msh), the resulting spool files have the following contents:

MDX output without header (output0mdxnohead.txt):

Actual#~Opening Inventory#~0#~Actual#~0#~Opening Inventory#~Cream Soda#~1#~300#~29095#~30334
Actual#~Opening Inventory#~0#~Actual#~0#~Opening Inventory#~Fruit Soda#~1#~400#~26409#~27588
Actual#~Ending Inventory#~0#~Actual#~0#~Ending Inventory#~Cream Soda#~1#~300#~30334#~32266
Actual#~Ending Inventory#~0#~Actual#~0#~Ending Inventory#~Fruit Soda#~1#~400#~27588#~29550
Budget#~Opening Inventory#~0#~Budget#~0#~Opening Inventory#~Cream Soda#~1#~300#~27380#~28460
Budget#~Opening Inventory#~0#~Budget#~0#~Opening Inventory#~Fruit Soda#~1#~400#~27230#~29030
Budget#~Ending Inventory#~0#~Budget#~0#~Ending Inventory#~Cream Soda#~1#~300#~28460#~30190
Budget#~Ending Inventory#~0#~Budget#~0#~Ending Inventory#~Fruit Soda#~1#~400#~29030#~31520

MDX output with header (output0mdxhead.txt):

Scenario#~Measures#~Scenario.LEVEL_NUMBER#~Scenario.MEMBER_UNIQUE_NAME#~Measures.LEVEL_NUMBER#~Measures.MEMBER_UNIQUE_NAME#~Product#~Product.LEVEL_NUMBER#~Product.MEMBER_UNIQUE_NAME#~Mar#~Apr
Actual#~Opening Inventory#~0#~Actual#~0#~Opening Inventory#~Cream Soda#~1#~300#~29095#~30334
Actual#~Opening Inventory#~0#~Actual#~0#~Opening Inventory#~Fruit Soda#~1#~400#~26409#~27588
Actual#~Ending Inventory#~0#~Actual#~0#~Ending Inventory#~Cream Soda#~1#~300#~30334#~32266
Actual#~Ending Inventory#~0#~Actual#~0#~Ending Inventory#~Fruit Soda#~1#~400#~27588#~29550
Budget#~Opening Inventory#~0#~Budget#~0#~Opening Inventory#~Cream Soda#~1#~300#~27380#~28460
Budget#~Opening Inventory#~0#~Budget#~0#~Opening Inventory#~Fruit Soda#~1#~400#~27230#~29030
Budget#~Ending Inventory#~0#~Budget#~0#~Ending Inventory#~Cream Soda#~1#~300#~28460#~30190
Budget#~Ending Inventory#~0#~Budget#~0#~Ending Inventory#~Fruit Soda#~1#~400#~29030#~31520

Multiple display statements concatenated (output0display.txt):

user#~session#~login_time#~application#~database#~db_connect_time#~request#~request_time#~connection_source#~connection_ip#~request_state
admin#~0#~2#~Sample#~Basic#~0#~none#~0#~localhost.localdomain*#~127.0.0.1#~

application#~database#~comment#~startup#~autostartup#~minimum permission#~aggregate_missing#~two_pass_calc#~create_blocks#~data_cache_size#~file_cache_size#~index_cache_size#~index_page_size#~cache_pinning#~compression#~retrieve_buffer_size#~retrieve_sort_buffer_size#~io_access_mode#~pending_io_access_mode#~no_wait#~committed_mode#~pre_image_access#~lock_timeout#~commit_blocks#~commit_rows#~currency_database#~currency_member#~currency_conversion#~note#~db_type#~read_only_mode#~db_status#~elapsed_time#~users_connected#~blocks_locked#~number_dimensions#~number_disk_volume#~data_status#~current_data_cache#~current_file_cache#~current_index_cache#~current_index_page#~currency_country_dim#~currency_time_dim#~currency_category_dim#~currency_type_dim#~currency_partition_dim
Sample#~Basic#~#~TRUE#~TRUE#~no_access#~FALSE#~TRUE#~FALSE#~3145728#~33554432#~1048576#~8192#~FALSE#~2#~20480#~20480#~1#~1#~FALSE#~2#~TRUE#~20#~3000#~0#~#~#~1#~#~0#~FALSE#~2#~00:00:00:01#~1#~0#~10#~0#~0#~0#~0#~1048576#~8192#~#~Year#~Measures#~#~

Single query statement (output0querydbstats.txt):

dimension_name#~type#~declared_size#~actual_size
Year#~0#~19#~12
Measures#~0#~17#~8
Product#~1#~22#~19
Market#~1#~25#~25
Scenario#~0#~5#~2
Caffeinated#~1#~3#~0
Ounces#~1#~5#~0
Pkg Type#~1#~3#~0
Population#~1#~15#~0
Intro Date#~1#~8#~0

DataExport Works Differently Between Essbase Versions 11.1.2.2 and 11.1.2.4


Dears,

I noticed this issue and thought of sharing it with everyone.

After migrating to Essbase v11.1.2.4.x from v11.1.2.2.x, some of the DataExport options set in a calc script do not work. For example, exporting data using this set of DataExport options:

DataExportDynamicCalc ON;
DATAEXPORTNONEXISTINGBLOCKS OFF;

On v11.1.2.2, this gave the result:

OK/INFO - 1012698 - This DataExport operation will export data from existing blocks only. Any FIX on sparse dynamic calc members will be ignored. Use DATAEXPORTNONEXISTINGBLOCKS ON option to export data from all potential blocks..
OK/INFO - 1012690 - Data Export Completed. Total blocks: [2705]. Elapsed time: [972.51]..
OK/INFO - 1012696 - Total Number of Non-Missing cells exported: [72109]..
OK/INFO - 1012550 - Total Calc Elapsed Time : [972.51] seconds.
OK/INFO - 1013274 - Calculation executed.

After migrating to v11.1.2.4, no data is exported:

OK/INFO - 1012698 - This DataExport operation will export data from existing blocks only. Any FIX on sparse dynamic calc members will be ignored. Use DATAEXPORTNONEXISTINGBLOCKS ON option to export data from all potential blocks..
OK/INFO - 1012690 - Data Export Completed. Total blocks: [0]. Elapsed time: [0.58]..
OK/INFO - 1012696 - Total Number of Non-Missing cells exported: [0]..
OK/INFO - 1012550 - Total Calc Elapsed Time : [0.582] seconds.
OK/INFO - 1013274 - Calculation executed.

One of the major issues with the dynamic calc export feature of DataExport is the number of blocks DataExport has to visit to export all data. This number can potentially be very high for large outlines with many members in the sparse dimensions. In that case, DataExport has to visit every combination of sparse members and materialize all possible blocks to export the data, including a large number of sparse dynamic blocks. Running dynamic calculation on that many blocks significantly slows down DataExport, and it can appear to hang.

It was therefore decided to modify the default behavior of DataExport's dynamic calc export. Instead of materializing and visiting all blocks, DataExport now exports only existing blocks. This means dynamic calculation is run for existing blocks only; all sparse dynamic blocks are skipped, and any FIX on sparse dynamic calc members is ignored. This behavior is controlled by a new option added to the existing DataExport options: DataExportNonExistingBlocks ON|OFF. The default value is OFF, so by default DataExport exports existing blocks only. If the option is turned ON, DataExport exports all potential blocks (combinations of sparse members).

The following two INFO messages were added to DataExport:

1. DataExportNonExistingBlocks is OFF (default behavior): This DataExport operation will export data from existing blocks only. Any FIX on sparse dynamic calc members will be ignored. Use DATAEXPORTNONEXISTINGBLOCKS ON option to export data from all potential blocks.

2. DataExportNonExistingBlocks is ON: DataExport Warning: This DataExport operation will export a total of [] blocks. Exporting dynamic calc members from all blocks has significant performance overhead. Use DATAEXPORTNONEXISTINGBLOCKS OFF option to export data from existing blocks only.

 

As the behavior has changed, to export dynamic blocks the following option needs to be enabled in the calc script:

DataExportNonExistingBlocks ON;
This should fix the issue.
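
For context, here is a minimal calc script sketch with the new option in place. This is an illustrative example only; the FIX member and export file path are placeholders, not from the original post:

/* illustrative sketch: export data from all potential blocks, */
/* including sparse dynamic calc blocks */
SET DATAEXPORTOPTIONS
{
DataExportDynamicCalc ON;
DataExportNonExistingBlocks ON;
};
FIX ("Actual") /* placeholder member; use your own FIX scope */
DATAEXPORT "File" "," "/tmp/alldata.txt" "#MI"; /* placeholder path */
ENDFIX

Be prepared for the performance-overhead warning mentioned above when this option is ON for a large outline.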
Thanks,
~KKT~

How To Purge FDMEE Files


Under the FDMEE instance, there are data and outbox directories that keep growing (80 GB). How do we clean up those directories? What is the standard process? There are also some other files, such as .ind and .dat, directly under the FDMEE instance. How do we clean those up?

To purge the FDMEE files, there is an option under Workflow > Go to > Script Execution > System Maintenance Tasks, where you will find the scripts to purge files.

These scripts run per application; you need to select the start and end periods along with the category.

This will help you to purge files.

Maintain Application Folder

The Maintain Application Folder process purges files from the inbox, outbox, and data folders. Oracle Hyperion Financial Data Quality Management, Enterprise Edition accepts a separate Days to Keep parameter for each of the folders. If the value is not specified for a folder, FDMEE skips it.

In addition, FDMEE checks the inbox and outbox sub-directories under the respective folders and deletes any files. In the data folder, FDMEE skips the scripts directory because it holds customer scripts.

Maintain Process Table

This process maintains the following execution tables:

  AIF_PROCESSES
  AIF_PROCESS_DETAILS
  AIF_PROCESS_LOGS
  AIF_PROCESS_PARAMETERS
  AIF_PROCESS_PERIODS
  AIF_PROCESS_STEPS
  AIF_BAL_RULE_LOADS
  AIF_BAL_RULE_LOAD_PARAMS
  AIF_BATCH_JOBS
  AIF_BATCH_LOAD_AUDIT
  AIF_TEMP

It accepts the number of days to keep as a parameter.
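
To get a rough sense of how many rows a given days-to-keep value would purge, you could first count old rows in the main process table. This is only a sketch: the timestamp column name CREATED is an assumption and may differ in your FDMEE schema, so verify it against the actual AIF_PROCESSES definition before running anything:

-- hypothetical sanity check for a days-to-keep value of 30
-- CREATED is an assumed column name; confirm it in your schema
SELECT COUNT(*)
FROM AIF_PROCESSES
WHERE CREATED < SYSDATE - 30;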

Thanks,

~KKT~

FDMEE Runs Out of Resources


Dears,

Since last month, we have noticed that the FDMEE software has been going down multiple times a day. We are receiving the below error message in the logs:

Error Message: weblogic.jdbc.extensions.PoolLimitSQLException: weblogic.common.resourcepool.ResourceLimitException: No resources currently available in pool aif_datasource to allocate to applications, please increase the size of the pool and retry..

We increased the connection pool for the JDBC source (aif-source) from the default 120 to 200, then to 300.

We then checked and found that the maximum JVM heap size is set to a large value, but the minimum JVM heap size does not match the maximum. Follow the steps below to update it.

A) Update the min and max JVM heap sizes to match each other (for example, raise -Xms128m to -Xms5120m to increase the minimum to 5 GB).

Start > Run > RegEdit

HKEY_LOCAL_MACHINE > Software > Hyperion Solutions > ERPintegrator0 > HyS9AIFWeb_epmsystem1

Locate the JVMOption value for -Xmsxxxm

Locate the JVMOption value for -Xmx5120m

Make sure the two match
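
For example, after the update the two registry entries should look something like this (illustrative only; the JVMOption numbers vary by installation, and 5120m is just the 5 GB example value from above):

JVMOption3 = -Xms5120m   (example entry; your option number may differ)
JVMOption4 = -Xmx5120m   (example entry; your option number may differ)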

B) Verify that the AIF_DataSource JDBC Connection Pool Max is set to a large enough value (depends on number of users)

C) Verify that the ODIMASTER JDBC Connection Pool Max is set to the same value as the AIF_DataSource JDBC Connection Pool

D) Recycle the FDMEE Web Application

Thanks,

~KKT~

Essbase Cloud – Importing and Exporting Bulk Users


Importing and Exporting Bulk Users

In Creating Users and Assigning Roles, you learned about creating a user and assigning a role.

Now you’ll learn how to run bulk imports and exports of files with multiple users and roles.

Importing a Bulk File of Users and Roles

  1. Create a comma-separated Excel file (saved as .csv) that contains the user ID, first and last names (optional), email address (optional), password (optional), and role type (User, Power User, or Service Administrator); a sample layout is sketched after these steps.
  2. Log into Oracle Analytics Cloud – Essbase as a Service Administrator, server_admin.
  3. On the Applications home page, click Security.
  4. On the Security page, click Import.
  5. Browse to the local .csv file created in Step 1, and click Import.
  6. Click Download Logs if you want to view the log file for results of the import. Otherwise, click Cancel.
  7. Click Search to refresh the page and view the users and roles imported to the cloud service from the .csv file.
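
For reference, here is a hypothetical sample of such a .csv file. The header names and column order shown are illustrative assumptions only; check the cloud service documentation for the exact layout it expects:

User ID,First Name,Last Name,Email,Password,Role
jdoe,John,Doe,jdoe@example.com,Welcome#1,User
psmith,Pat,Smith,psmith@example.com,Welcome#1,Power User
srvadm,Sam,Admin,srvadm@example.com,Welcome#1,Service Administrator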

Exporting Bulk Users and Roles

  1. On the Security page, on the Users tab, click Export.
  2. Save the .csv file to a local directory.
  3. You can open the .csv file in Excel to view the exported user data.

Thanks,

~KKT~

Essbase Cloud – Creating Users and Assigning Roles


Creating Users and Assigning Roles

In this task, you’ll create a user and assign one of the predefined Oracle Analytics Cloud – Essbase roles to that user.

The predefined user-level roles are:

• Service Administrator: Can administer the entire cloud service instance, including backing up, creating and deleting applications, provisioning users, and running jobs (such as calculations and data loads).
• Power User: Can create applications and cubes, and grant access to users to perform actions on those cubes.
• User: Can access and perform actions on cubes for which access has been granted.
  1. Log into Oracle Analytics Cloud – Essbase using the default Oracle Analytics Cloud administrative user, admin.
  2. On the Applications home page, without selecting an application or cube, click Security.
  3. On the Users tab, click Create to add and provision Oracle Analytics Cloud – Essbase users.
  4. On the New User dialog box, provide the following information:
    1. Enter a user ID for a Service Administrator role: server_admin.
    2. Enter the user’s name.
    3. Enter the user’s e-mail address.
    4. Create and confirm a password.
    5. Select the role Service Administrator.
  5. Click Create. The new Service Administrator user is added to the list of users.

Thanks,

~KKT~

OAC Essbase – Security Provisioning Users


Provisioning Users

If you’re a Service Administrator or Power User, you can provision users and groups for applications that you created.

Note:

  • Users and groups are created and managed on the Security page of the cloud service, as described here.
  • You can easily import or export user and group data files using the corresponding buttons.
  • You can’t create users and groups with the same name.

Creating a User

  1. On the Applications home page, without selecting an application or cube, click Security.
  2. On the Users tab, click Create to add a user.
  3. On the New User dialog box, enter the ID, Name, Email, and Password for the user, and then select a Role.
  4. Click Create.

Editing a User

  1. On the Applications home page, without selecting an application or cube, click Security.
  2. On the Users tab, select a user row, click Actions, and then select Edit.
  3. On the Edit User dialog box, modify the user details and click Submit.

Deleting a User

  1. On the Applications home page, without selecting an application or cube, click Security.
  2. On the Users tab, select a user row, click Actions, and then select Delete.
  3. On the Delete User dialog box, click Yes if you want to confirm the user deletion.

Creating a Group

  1. On the Applications home page, without selecting an application or cube, click Security.
  2. On the Groups tab, click Create.
  3. On the New Group dialog box, enter the Name and Description for the group, and then select a Role.
  4. Click Create.

Editing a Group

  1. On the Applications home page, without selecting an application or cube, click Security.
  2. On the Groups tab, select a group row, click Actions, and then select Edit.
  3. On the Edit Group dialog box, modify the group details and click Submit.

    The group’s details are modified.

    Note:

    On the Parent Groups tab, the parent of a group can be selected from the available groups that are listed.

Deleting a Group

  1. On the Applications home page, without selecting an application or cube, click Security.
  2. On the Groups tab, select a row for a group, click Actions, and then select Delete.
  3. On the Delete Group dialog box, click Yes if you want to confirm the group deletion.

Thanks,

~KKT~

OAC Essbase – Downloading Server and Application Logs


As a Service Administrator, you can download all logs (server and application). You can download the latest log as well as rolled-over logs.

  1. On the Applications home page, click Admin without selecting an application.
  2. Expand Server and Applications to see the available logs.
  3. Click Download and select Download Latest or Download All.

You can download these kinds of server logs:

  • Managed Server: Warnings and error messages regarding WebLogic issues
  • Console Output: Managed server run-time exceptions and messages
  • Platform: User interface
  • Agent
  • Application Provider Service

As an Application Manager, you can download application logs. You can download the latest log as well as rolled-over logs.

  1. On the Applications home page, select an application, then click Admin.
  2. Click Logs.
  3. Select Latest or All, then click Download.

Thanks,

~KKT~

OAC Essbase – Starting, Stopping, and Restarting Services


You can start, stop, and restart services from the Oracle Analytics Cloud dashboard.

1. On the Oracle Analytics Cloud dashboard, select Manage the service for the service you want to start, stop, or restart.

2. Select the action you want to perform: Start to start the service, Stop to stop it, or Restart to restart it.

Alternatively, click the icon for Start Service, Stop Service, or Restart Service on the Overview pane.

Thanks,

~KKT~