OBIA 7.9.6.4: How to Perform a Partial Full Load in OBIA


To perform a partial full load, reset the refresh dates for the tables you want to re-load. This forces the ETL to perform a full load of those tables and their dependents, because it assumes no data has been loaded before.

Resetting the refresh dates for objects in the execution plan forces a full load of those objects, but it also forces a full load of any other objects that depend on them. So, if you refresh a conformed dimension that is used by multiple analysis modules, ALL the related fact tables will be fully loaded as well.

To find the list of impacted tables, follow these steps:

1. Find all the subject areas associated with the particular module in the execution plan (Execute –> Execution Plans –> Subject Areas).
2. Once you know the associated subject areas, go to the container (Design tab) in which the tasks of that module exist and query for those subject areas on the Subject Areas tab (Design –> Subject Areas).
3. Query each subject area and find all the associated tables on the ‘Extended Tables (RO)’ subtab (Design –> Subject Areas –> Extended Tables).

This should give you a list of all the fact tables, dimension tables and conformed dimension tables, which you can consolidate as in the sketch below.
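To consolidate what you collect from the Extended Tables (RO) subtab across subject areas, a small script can build the de-duplicated list of tables whose refresh dates need to be reset. This is only a minimal sketch; the subject area and table names below are illustrative examples, not a definitive mapping.

```python
# Minimal sketch: merge the tables listed under each subject area's
# "Extended Tables (RO)" subtab into one de-duplicated list. Subject area
# and table names here are examples only.
subject_area_tables = {
    "Financials - General Ledger": ["W_GL_BALANCE_F", "W_LEDGER_D", "W_INT_ORG_D"],
    "Financials - Payables":       ["W_AP_XACT_F", "W_SUPPLIER_D", "W_INT_ORG_D"],
}

impacted = sorted({table for tables in subject_area_tables.values() for table in tables})

print(f"{len(impacted)} tables impacted by the partial full load:")
for table in impacted:
    print(" -", table)
```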

 

For details, refer to the Oracle documentation.

Thanks.

 

HFM – Relationship between DOWN and POWN System Accounts


Relationship between POWN and DOWN system accounts:

 

  • The percentage of direct ownership of an entity can be retrieved from the system account [Shares%Owned]. Direct Ownership (DOWN) is exactly that: for example, entity A directly owns 80% of entity B’s shares.
  • Percent Ownership (POWN) is not direct; it is a calculated value based on the shares owned by the entity marked as “holding”. If the entity GROUP owns the entity HOLDING, and HOLDING owns shares of entities A, B and C, then the system calculates how much of A, B and C GROUP owns. That is POWN. The percentage is then used to determine how much of the values in A, B and C should be consolidated in GROUP (the PCON value).
  • The example below illustrates the relationship between POWN and DOWN.
    In the metadata, the entity structure shows that the parent Europe has children 1A, 1B, 1C and 1D. 1A is the holding company for Europe.

EPMA_DIM

Direct Percentage of Ownership is set up using a Data Grid with the following:
– 1B is owned by 1A 80%
– 1C is owned by 1A 70%
– 1D is owned by 1B 20%
– 1D is owned by 1C 20%

HFM_Grid

Using Ownership Management, the system calculates Percent Ownership (POWN) for the children of Europe as follows:
– 1A 100%
– 1B 80%
– 1C 70%
– 1D 30%

HFM_Ownership

The calculated POWN is relative to Europe. The value of 30% for entity 1D is calculated as follows:


1B is owned 80%, which owns 20% of 1D      80% * 20% = 16%
1C is owned 70%, which owns 20% of 1D      70% * 20% = 14%

16% + 14% = 30%

In summary, DOWN represents the percentage of direct ownership and POWN represents the ultimate percentage of ownership.
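The same arithmetic can be written out as a short script. This is only an illustrative sketch of the calculation described above, not how HFM computes POWN internally; the entity names and percentages come from the example.

```python
# Illustrative sketch: derive POWN (ultimate ownership) from DOWN (direct
# ownership) for the Europe example. Keys are (owner, owned) pairs.
down = {
    ("1A", "1B"): 0.80,
    ("1A", "1C"): 0.70,
    ("1B", "1D"): 0.20,
    ("1C", "1D"): 0.20,
}

# 1A is the holding company for Europe, so its POWN is 100%.
pown = {"1A": 1.00}

# An entity's POWN is the sum over its direct owners of
# (owner's POWN * direct percentage owned).
for entity in ["1B", "1C", "1D"]:
    pown[entity] = sum(pct * pown[owner]
                       for (owner, owned), pct in down.items() if owned == entity)

for entity, pct in pown.items():
    print(f"{entity}: POWN = {pct:.0%}")
# Prints 1A: 100%, 1B: 80%, 1C: 70%, 1D: 30%  (1D = 80%*20% + 70%*20%)
```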

 

Thanks.

 

Essbase – Calculate Swap Space or Virtual Memory Usage


Each process (program) running on the server can lock a chunk of swap space or virtual memory for its exclusive use.

If too many processes lock too much space, the next attempted lock will fail, and that process will terminate abnormally with a memory allocation error.

Because there are so many variables involved, it is difficult to calculate or predict the maximum amount of swap space or virtual memory that will be needed.
As a reasonable starting point, we recommend setting virtual memory or swap space to between two and three times the installed physical RAM.
For example, if 4 GB of RAM is installed in the server, swap space or virtual memory should be set to a minimum of 8 GB, with 12 GB preferred if sufficient disk space is available.
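Expressed as a trivial sketch of that sizing rule of thumb (the function name is just for illustration):

```python
def recommended_swap_gb(installed_ram_gb: float) -> tuple[float, float]:
    """Return (minimum, preferred) swap or virtual memory in GB:
    two to three times the installed physical RAM."""
    return installed_ram_gb * 2, installed_ram_gb * 3

minimum, preferred = recommended_swap_gb(4)
print(f"Minimum swap: {minimum} GB, preferred: {preferred} GB")  # 8.0 GB / 12.0 GB
```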

We also recommend that resource-intensive applications, such as relational databases, be moved to separate servers and not be co-located with Essbase.

Another important memory-related metric to track is the amount of memory consumed by each ESSSVR process.
Each ESSSVR process is a running Essbase application, and on 32-bit Essbase, no single process can ever use more than about 2 GB of memory address space (4 GB on Solaris; 1.7 GB on HP-UX).
If an ESSSVR process tries to consume more than this amount, memory allocation errors and perhaps data corruption can occur.

 

 

Guidelines in case of memory allocation issues

  1. Reduce the number of databases defined for the application (to one only if possible).
  2. Within the database, reduce the amount of memory allocated for caches.
  3. Reduce the block size (8,000 to 64,000 bytes recommended).
  4. Reduce the number of members in the outline.

 

OBIA 11g: Oracle BI Applications 11.1.1.8.1 to 11.1.1.9.2 Upgrade Guide


To upgrade BI Applications 11.1.1.8.1 PB6 to version 11.1.1.9.2, you must upgrade the following components, repositories (schema and content) and data:

  1. Platform components
  2. BI Applications binaries
  3. BI Applications Component Repository (BIACOMP)
  4. JAZN, RPD and Presentation Catalog
  5. ODI Repository content (BIA_ODIREPO)
  6. Business Analytics Warehouse (DW) – schema
  7. Data Migration of existing data in the Business Analytics Warehouse

 

The sequence of steps to upgrade BI Applications 11.1.1.8.1 PB6 to 11.1.1.9.2 is outlined below.

  1. Complete Upgrade Pre-requisites.
  2. Run the BI Applications 11.1.1.9.2 installer to upgrade the BI Application binaries from version 11.1.1.8.1 PB6 to 11.1.1.9.2.
  3. Apply the FMW Middleware Patches for BI Applications 11.1.1.9.2.
  4. Use the PSA tool to upgrade BIACOMP schema (ATGLite, FSM, BIACM and BIACM_IO component upgrades).
  5. Run script to upgrade deployment changes in BI Applications 11.1.1.9.2.
  6. Use the BI Update Metadata Tool to upgrade the JAZN.
  7. Upgrade the RPD and Presentation Catalog.
  8. Apply Client-Side Patches.
  9. Upgrade the ODI Repository metadata (content).
  10. Upgrade the Business Analytics Warehouse schema and migrate data in the data warehouse.

Instructions for each step of the upgrade process are provided in the attached upgrade guide document – Oracle_BI_Applications_11_1_1_9_2_Upgrade_Guide.

 

OBIEE 12c – New Features in OBIEE 12.2.1.0.0


Oracle BI 12c is now generally available. BI 12c is the strategic foundation of Oracle’s analytics platform, delivering significant new value and lower total cost of ownership.

 

What’s New

BI 12c delivers a major update to the Oracle BI platform, with enhancements across the entire platform, as well as new Data Visualization capabilities.  Highlights and benefits include:

  1. Easy to upgrade:  BI 12c offers a radically simple and robust upgrade from 11g, saving customers time and effort in moving to the new version.  BI 12c includes a free utility to automate regression testing, the Baseline Validation Tool, which verifies data, visuals, catalog object metadata, and system-generated SQL across 11g and 12c releases.
  2. Faster:  Sophisticated in-memory processing includes BI Server optimizations and support for multiple in-memory data stores, while in-memory Essbase on Exalytics offers enhanced concurrency and scalability, as well as significant performance gains.
  3. Friendlier:  Usability updates throughout BI 12c demonstrate Oracle’s continued commitment to making analytics as fast, flexible, and friendly as they are powerful and robust.  A new user interface simplifies the layout for the homepage, Answers, and Dashboards, making it easier for users to quickly see what’s important; HTML-5 graphics improve view rendering; and it’s easier for users to create new groups, calculations, and measures, for simpler, more direct interaction with results.
  4. More Visual:  A consistent set of Data Visualization capabilities are now available across Oracle BI Cloud Service and Oracle BI 12c, as well as the upcoming Oracle Data Visualization Cloud Service, offering customers a continuity of visual experience unmatched by our competitors.  Business users can point and click to upload personal data and blend it with IT-managed data in BI 12c, which automatically infers connections between data sets.  Visualizing data is as easy as dragging attributes onto the screen, with optimal visualizations automatically displayed – no modeling or upfront configuration required.  Related data is automatically connected, so highlighting data in one visual highlights correlated data in every other visual, immediately showing patterns and revealing new insights.  These insights, along with narrative comments, can be shared as interactive visual stories, enabling seamless collaboration that drives fact-based decisions.
  5. More Advanced:  Predictive analysis is more tightly integrated, enabling customers to more easily forecast future conditions based on existing data points, group elements that are statistically similar, and expose outliers.  BI 12c includes the ability to run the free Oracle R distribution on BI Server, and extend existing analytics with custom R scripts, which can point to any engine (R, Oracle Database, Spark, etc.) without needing to change the BI RPD to deliver results.
  6. More Mobile:  Keyword search (“BI Ask”) empowers users to literally talk to their data, asking questions and having visualizations automatically created as responses, opening up an easy entry point for authoring.  Additionally, the interface for iOS has been completely redesigned; and Mobile BI for Android offers sharing and following for nearby devices, as well as the ability to project any dashboard or story to GoogleCast-enabled devices.
  7. Bigger Data:  BI 12c enables customers to use new data, from local, Oracle, and Big Data sources, including personal files uploaded by users; direct access to data in Hyperion Financial Management and Hyperion Planning applications; and ODBC access to Cloudera Impala.  Apache Spark will be accessible via native Spark SQL in an upcoming update.
  8. Higher ROI, Lower TCO:  Streamlined administration and life cycle management reduce the time and resources required to manage BI 12c, decreasing costs and increasing value for this and future releases.  Enhancements include separating metadata from configuration; simpler, more robust security; easier backup, restore, and disaster recovery; hot patching of metadata; and many more.

Availability:

BI 12c is available for download from eDelivery.

 

 

Data load in PBCS with FDMEE


I have been working for a long time on integrating FDMEE and PBCS to load data as part of an automation process, so here are the complete steps.

Using FDMEE, you must register the source system from which you want to import data. For Oracle Planning and Budgeting Cloud, it is a file-based source system. FDMEE creates a file-based source system automatically when you install and configure the product. You must also register the target application (for example, Planning) to which you want to load the data from one or more source systems.

In this tutorial, we take source data from a data file downloaded from an Enterprise Resource Planning (ERP) source ledger and load the data to an Oracle Planning and Budgeting Cloud application.

In Oracle Planning and Budgeting Cloud Service Workspace, select Navigate > Administer > Data Management to launch FDMEE.

Launching FDMEE

On the Setup tab, under Register, select Source System.

The Source System Summary grid displays the File source system.

In the “File : Details” grid you can define the URL for drill through.

File: Details grid

  • Note: You can drill to any system that has an externally available URL.

    Under Register, select Target Application. Confirm that the Oracle Planning and Budgeting Cloud application that you want to load data to is included as a valid target application. If it hasn’t been included, you’ll need to add it. Here we see that the FinApp application is set up as a target application.

    TargetApplication

    In the Application Details grid for the target application, you specify the dimension details and application options. The Dimension Details tab shows the dimensions in the Oracle Planning and Budgeting Cloud application. Among other dimensions, this application includes the Account, Entity, Period, and Years dimensions.

    Application Details grid

    The Application Options tab shows the available options for the Oracle Planning and Budgeting Cloud application. If you set the Drill Region flag to Yes, you’ll be able to drill through to the source file or system.

    Application Options tab

    In FDMEE, you create an import format based on the source type that you want to load to the target application. The import format defines the layout of source data.

    In this tutorial, you define the layout of a source data file that includes fields delimited by a comma. The following columns are included in the file:

    1. Account
    2. Entity
    3. Amount
    4. CCID
    5. Period

    The following screenshot displays a portion of the data file, TrialBalAug2013.csv:

    TrialBalAug2013.csv
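To make the layout concrete, here is a hedged sketch that parses rows in that column order. The sample values and the CCID format are invented for illustration; the real TrialBalAug2013.csv comes from the ERP source ledger and its values will differ.

```python
import csv

# Sketch only: read a comma-delimited trial-balance extract with the columns
# listed above (Account, Entity, Amount, CCID, Period). Sample rows are invented.
sample = """1410,1,125000,101-1410-0000,Aug-13
1499,1,90000,101-1499-0000,Aug-13
"""

columns = ["Account", "Entity", "Amount", "CCID", "Period"]
rows = [dict(zip(columns, fields)) for fields in csv.reader(sample.splitlines())]

for row in rows:
    print(row["Account"], row["Entity"], row["Amount"], row["CCID"], row["Period"])
```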

    In the import format, you define the location of these columns and map them to dimensions in the target application. In this tutorial, you also define the CCID dimension as an attribute dimension in the import format.

    To create the import format, perform the following steps:

    1. Under Integration Setup, select Import Format.
    2. On the Import Format page, click Add to add an import format. Under Details, enter the import format name (OperationsIF), source system (File), file type (Delimited), file delimiter (Comma), and target application (FinApp). You also specify the URL for drill through.

    alt description here

    In the Mappings grid of the import format, you map the columns in the source data-load file to the dimensions in the target application. You add the CCID source column as an attribute dimension in the import format. By including the CCID in the data-load file and import format, you can drill back to the source environment.
    The currency is not included in the data-load file, but you define the default currency, USD, in the import format.

    Application Options tab

    After defining the import format, you create a location and specify the import format for the location. The location combines related mapping and data load rules into a logical grouping.  You also review the period and category for the data load.

    Perform the following steps to add a location and review the period and category definitions:

    1. Under Integration Setup, select Location, and click Add to add a location

    Location grid

    In the Details grid, define the location name (Operations) and the associated import format (OperationsIF). The system automatically populates the Target Application (FinApp) and Source System (File) for you.

    Details grid

    Under Integration Setup, select Period Mapping to review the period definitions on the Global Mapping page.

    You define period mappings in FDMEE to map your source system data to Period dimension members in the EPM target application. You can define period mappings at the Global, Application, and Source System levels. In this tutorial, we will load data to the Aug-13 time period.

    Period Mapping

  • Note: If the periods to which you want to load data do not exist, you must create them.

    Under Integration Setup, select Category Mapping to review the category definitions. You define category mappings to map source system data to target Scenario dimension members.
    In this tutorial, you load data into the Current category.

  • Period Mapping

    Note: If the categories to which you want to load data do not exist, you must create them.

    You create data load mappings in FDMEE to map source dimension members to their corresponding target application dimension members. You define the set of mappings for each combination of location, period, and category to which you want to load data.  Perform the following steps to define the data load mappings:

    In FDMEE, click the Workflow tab. Under Data Load, select Data Load Mapping.

    Data Load Mapping

  • The Data Load Mapping page is displayed.

    You must select the Point of View (POV) to define the data load mappings for a specific location, period, and category. At the bottom of the page, click the current location name and define the POV in the Select Point of View dialog box. In this tutorial, you select the Operations location, Aug-13 period, and Current category.

  • Select Point of View

    On the Data Load Mapping page, define the mappings for the Entity, Version, and Account dimensions.
    In this tutorial, you define an Explicit mapping for the Entity dimension to map the source value, 1, to the target entity, Operations.

    Data Load Mapping page for the Entity dimension

    You define a Like mapping for the Version dimension to map any source value to the target version, BU Version_1.

    Data Load Mapping page for the Version dimension

    You also define Like mappings for the Account dimension to map source values to target accounts. For example, all source account values that begin with 14 are mapped to the target account, Inventory. The source data file includes accounts 1410 and 1499 that are both mapped to the account member Inventory in the target application.  When you run the data load, the source data for these two members will be summed up in the Inventory account in Oracle Planning and Budgeting Cloud.
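As an illustration of how the Explicit and Like mappings combine (this is not FDMEE's internal implementation, and the amounts are invented), a minimal sketch:

```python
# Sketch only: apply the Explicit mapping (Entity 1 -> Operations) and the
# Like mapping (accounts starting with 14 -> Inventory), then sum the amounts.
source_rows = [
    {"Account": "1410", "Entity": "1", "Amount": 125000.0},  # invented amount
    {"Account": "1499", "Entity": "1", "Amount": 90000.0},   # invented amount
]

def map_entity(src: str) -> str:
    # Explicit mapping: source value 1 maps to the target entity Operations.
    return "Operations" if src == "1" else src

def map_account(src: str) -> str:
    # Like mapping: any source account beginning with 14 maps to Inventory.
    return "Inventory" if src.startswith("14") else src

totals: dict[tuple[str, str], float] = {}
for row in source_rows:
    key = (map_entity(row["Entity"]), map_account(row["Account"]))
    totals[key] = totals.get(key, 0.0) + row["Amount"]

print(totals)  # {('Operations', 'Inventory'): 215000.0} -> 1410 and 1499 summed
```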

    Your next step in FDMEE is to create a data load rule for a specific location and category to define what you want to import from the source system. For example, the data load rule defines the file(s) that you want to import in addition to other details. After creating the data load rule, you execute it to import the data for one or more periods. You then verify that the data was imported and transformed correctly, and then export the data to the target application.

    Perform the following steps to define and execute the data load rule, and then verify that the data was imported and transformed:

    On the Workflow tab, under Data Load, select Data Load Rule.

    Workflow tab

    The Data Load Rule page is displayed with the current POV. In this example, the POV is set to the Operations location, Aug-13 period, and Current category.

    Click Add to add a data load rule. In the Details grid, specify the data load rule name, OperationsDLR, and select the data load file name. In this example, you load the file TrialBalAug2013.csv.

    • Note: If the file information is left blank, the user will be prompted to select a file at run time.

     

    Data Load Rule

    With the rule selected, click Execute.  In the Execute Rule dialog box, you define the options for running the rule. In this example, you specify Import from Source, select Aug-13 as the starting and ending period, select the Replace import mode, and click Run.

    In this example, you check only the “Import from Source” option so that you can review the data and the mapped results in the workbench before loading it to Oracle Planning and Budgeting Cloud. After everything is confirmed, subsequent runs can import to the workbench and export to Oracle Planning and Budgeting Cloud in the same step.

    Execute Rule

    Note: When you import source data, FDMEE imports and also transforms (validates) the data by using the data load mappings that you defined.

    An Information dialog box displays your process ID.

    Information
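For the automation mentioned at the start, the same rule execution can also be scripted instead of run from the UI. The sketch below is only an outline against what I understand to be the Data Management REST jobs interface; the /aif/rest/V1/jobs endpoint, the payload field names, and the response contents are assumptions to verify against the REST API documentation for your pod, and the host and credentials are placeholders.

```python
import requests

# Hedged sketch: submit the OperationsDLR data load rule as a Data Management
# REST job. Endpoint, payload keys, and response fields are assumptions to verify.
base_url = "https://<your-pbcs-host>/aif/rest/V1"  # placeholder host
auth = ("service_admin_user", "password")          # replace with your credentials

payload = {
    "jobType": "DATARULE",
    "jobName": "OperationsDLR",        # data load rule name from this tutorial
    "startPeriod": "Aug-13",
    "endPeriod": "Aug-13",
    "importMode": "REPLACE",           # same import mode chosen in the Execute Rule dialog
    "exportMode": "STORE_DATA",        # export to the target application in the same run
    "fileName": "TrialBalAug2013.csv",
}

response = requests.post(f"{base_url}/jobs", json=payload, auth=auth)
response.raise_for_status()
print("Submitted; response:", response.json())  # typically includes a job/process ID
```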

     

    • Note: To verify the status of your process, you can navigate to Process Details. A green check mark indicates success for your process ID.

      Under Data Load, select Data Load Workbench to see the results of the data load and the mapping process in Data Load Workbench.

    Data Load Workbench

     

    • The Load Data tab displays the imported data. At the top of the page, the Import and Validate tasks display a gold fish icon, which indicates a successful import and validation.

      Note: During validation, FDMEE applies your data load mappings to map source members to target members. When validation errors occur during the Validate task, a Validation tab displays the unmapped source members. You must review the unmapped source members and then, in the Data Load Mapping task, adjust the mappings to correct all errors. After correcting the errors, execute the data load rule again and check the results in Data Load Workbench. You must iterate between adjusting the data load mappings, importing the data, and viewing the results until you are satisfied that they include the rows that you want to load to the target application.

    • After a successful load, click Export to load the data to the target application. Select the desired load options, and then click OK.

    Execute Rule

    When complete, the icon will change to indicate a successful load.

    Information

    Let’s now look at the data that you loaded to the target application. In Oracle Planning and Budgeting Cloud, we open a form to verify that the data was loaded, and then we confirm that we can drill through from the Oracle Planning and Budgeting Cloud form to the source system.
    In Oracle Planning and Budgeting Cloud, perform the following steps to verify the data load and the drill through:

    Open a form in the Oracle Planning and Budgeting Cloud application that contains the accounts for which you loaded data. The data form displays data for the Operations entity, Aug period, FY13 year, Current scenario, BU Version_1 version, and USD currency. The form displays the loaded data.

    FinApp - TrialBal tab

     

    • You also notice an icon in the upper-right corner of the data cells indicating that you can drill on the cells to navigate back to the source.

      Right-click on the Inventory data cell, and select Drill Through.

    Drill Through

    Then click Drill Through to source to start the drill-through process.

    Drill Through to source

    The first step in the drill-through process displays the source rows that comprise the Inventory amount in Oracle Planning and Budgeting Cloud.  Note that two rows (1410 and 1499) from the source data-load file were loaded to the same account in Oracle Planning and Budgeting Cloud.

    Drill Through Summary

     

    • From here you can click an amount to display the source data-load file or drill through to the source system.

      In the Drill Through Details grid, click the amount for the 1499 row, and select Drill Through to Source to drill through to the source system; in this case, Oracle E-Business Suite. Because you included the CCID in the data-load file, you can drill through to the original data in Oracle E-Business Suite.

    Drill Through Details

    The first time you access this source system, you are prompted to enter a user name and password.

    You are then taken to the ERP data source, which displays the respective code combinations.

     

    That’s it. Queries and comments are welcome.

    Thanks.

     

     

     

     

    Essbase error – Illegal Block Header !!!


    Essbase normally maintains correct and synchronized data and index files. Trouble usually arises when Essbase applications are abnormally terminated. When an abnormal termination occurs at the wrong point during processing, data or index files may not be written properly to disk, resulting in database corruption. This can happen for various reasons:

    – a power failure or the server machine is turned off abruptly
    – the server operating system fails or is shut down
    – an operator kills an Essbase task on the server
    – some outside influence causes a fatal interruption of application processing
    – a user attempts to access data within a previously corrupted database
    – a bug in Essbase causes the application to crash

    Essbase databases contain data files (PAG) and index files (IND). The data files store blocks of data values or cells. Index files identify where in the data files each block of data resides so that during data retrieval, load, or calculation, Essbase can extract the values for processing. Because these files are stored separately, they must remain completely synchronized, or else the index would not reference the correct data blocks.

    Under normal circumstances, Essbase ensures these files are correct. Essbase employs a number of techniques internally to minimize the potential for corruption, including a “safe-write” methodology which uses space in the files to store both current and previous copies of changed data. In the event of an abnormal termination, Essbase can usually recover to a prior, known good point in time. Unfortunately, sometimes events occur which leave the files in an unsynchronized state.

    If the data and index files are not synchronized, the database is corrupt. The index is the only means for interpreting the data files, so if the index is bad, the data is lost. There is no other utility which can read the data files and recover the data. This underscores the importance of having a good backup strategy.

    Depending on the extent of the corruption, some portions of the database may still be accessible. This can be beneficial in recovery efforts, but it also means that users may be working with a corrupted database and not know it. Some time may pass before someone accesses the “bad” spot in the database and discovers the corruption. When this happens, the Essbase application process on the server will usually “crash” and produce an exception log (XCP file). The user will get a message indicating a network error because the server is no longer responding to the client.

    In some cases, no one notices these crash events. The users simply reconnect, the database is reloaded, and they may continue in this fashion for a long time. The problem compounds, and sometimes the database degrades to the point where it can no longer even start up. At this point, the only recourse is to recover the files from a file system backup.

    Here is a sampling of error messages that may indicate a corrupted Essbase database (other, similarly worded messages may also occur); a log-scanning sketch follows the list:

    1006010 Invalid block header: Block’s numbers do not match
    1006016 Invalid block header: Illegal block type
    1070020 Out of disk space. Cannot create a new index file.
    1070026 Corrupted Node Page in the B+tree. [%s] aborted
    1005016 Unable to Read Information From Page File
    1006002 Unable to Store Information In Page File
    1006004 Unable to Read Information From Page File
    1080009 Fatal Error Encountered. Please Restart the Server…
    1070018 Invalid file handle [%s]. [%s] aborted.
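Rather than waiting for users to hit the bad block, you can periodically scan the application log for those error numbers. A minimal sketch, where the log path is an example only and should be adjusted for your environment:

```python
# Sketch only: scan an Essbase application log for the error numbers listed
# above that may indicate corruption.
CORRUPTION_ERRORS = {"1006010", "1006016", "1070020", "1070026", "1005016",
                     "1006002", "1006004", "1080009", "1070018"}

log_path = "/path/to/ARBORPATH/app/Sample/Sample.log"  # example path

with open(log_path, encoding="utf-8", errors="replace") as log:
    for line_number, line in enumerate(log, start=1):
        if any(code in line for code in CORRUPTION_ERRORS):
            print(f"line {line_number}: {line.strip()}")
```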

    Refer to the Oracle URL below for the fix and possible root cause.

    Fix and root cause.

     

    Hyperion Planning – Capture SQL Transactions in the Log File


    In Hyperion Planning go to:

    Administration > Manage Properties > System Properties

    Add a new parameter called PROFILING_ENABLED and set it to “true”.

    Also, if it is not already there, add the parameter DEBUG_ENABLED and set it to “true”.

    Restart the Planning service for the changes to take effect. The additional logging should be recorded in the following logs:

    EPM_INSTANCE_HOME\diagnostics\logs\services\HyS9Planning-sysout.log
    EPM_INSTANCE_HOME\diagnostics\logs\services\HyS9Planning-syserr.log
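Once the properties are set, the captured SQL appears in the sysout log among the other output. A minimal sketch for pulling out lines that look like SQL; the path and the keywords matched are assumptions, so adjust them to what your log actually shows:

```python
# Sketch only: print lines from the Planning sysout log that look like SQL
# captured after enabling PROFILING_ENABLED and DEBUG_ENABLED.
log_path = r"EPM_INSTANCE_HOME\diagnostics\logs\services\HyS9Planning-sysout.log"

sql_keywords = ("SELECT ", "INSERT ", "UPDATE ", "DELETE ")

with open(log_path, encoding="utf-8", errors="replace") as log:
    for line in log:
        if any(keyword in line.upper() for keyword in sql_keywords):
            print(line.rstrip())
```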

     

    Essbase – Unable to Delete or Update a Partition


    Unable to delete or update a partition using either the Essbase Administration Services Console or MaxL.

    Getting the error: “1241128: Partition does not exist”.

    This happens with either BSO or ASO partitions.

    The issue occurs when using EAS 11.1.1.3.x and higher.

    This is expected behaviour from EAS.

    EAS version 11.1.1.3.x and later require the port number when connecting to the Essbase server and creating partitions.

    By default, if you are running Essbase on the default port (1423), the port number is not added to the Essbase Server name when you add the server in the EAS console.

    In version 11.1.1.3.x and later, you must also specify the port number as well as the Essbase Server name when adding the server to the EAS Console.

    To resolve the issue:
    1. Open the EAS Console.
    2. Right-click Essbase Servers and select ‘Add Essbase Server’.
    3. Add the Essbase Server in the format ServerName:PortNumber (for example, Localhost:1423).
    4. Test to make sure you can connect and can see your applications.
    5. Once added, you may remove the old Essbase Server entry by right-clicking it and selecting ‘Remove Essbase Server’.

    You should now be able to update and delete your partitions.

    Essbase – Partitions Appear Only As Orphaned Partitions After Upgrading


    A newly created partition in Essbase Administration Services (EAS) Console shows up as an Orphaned Partition after upgrading to Essbase v9.3.3 or Essbase v11.1.x or any other version.

    The possible cause: when a port number is not specified, the code has been modified to internally add the Essbase default port “1423” when handling partitions. This has been verified as unpublished Bug 10178179, “Partitions are orphaned or repaired if port is not included in the server name”.

    As a workaround, in the EAS Console:

    1. Drop the created partition.
    2. Right-click the Servers node.
    3. Select “Add Essbase Server”.
    4. Add the Essbase Server name including the Essbase port number, e.g. EssbaseServerName:1423.
    5. Create the partition and use the host name as added above for the source and target host names in the partition manager.

    The Essbase Server name used can be either the short name or the fully qualified domain name.