You will have to reset the refresh dates for the table you want to re-load. This will force ETL to perform a full load for all the listed and dependent tables, as it thinks no data has been loaded before.
This will reset the refresh dates for all the objects in the execution plan and force a full load of them, and it will also force a full load of any other objects that depend on them. So, if you refresh a conformed dimension used by multiple Analytics modules, ALL the related facts will have a full load performed on them.
To find the list of tables impacted, follow these steps:
1. Find all the subject areas associated with the particular module in the execution plan (Execute –> Execution Plans –> Subject Areas).
2. Once you know all the associated subject areas, go to the particular container (Design tab) in which the tasks of that module exist, and query for those subject areas in the Subject Areas tab (Design –> Subject Areas).
3. Query each subject area and find all the associated tables in the ‘Extended Tables (RO)’ subtab (Design –> Subject Areas –> Extended Tables).
This should give you a list of all fact tables, dimension tables and conformed dimension tables.
Relationship between POWN and DOWN system accounts
Percentage of Direct Ownership of the Entity can be retrieved from the system account [Shares%Owned]. Direct Ownership (DOWN) is exactly that. For example, entity A directly owns 80% of entity B’s shares.
Percent Ownership (POWN) is not direct, it is a calculated value based on the shares owned by the entity marked as “holding”. If the entity GROUP owns the entity HOLDING, and HOLDING owns shares of entities A, B and C, then the system will calculate how much of A, B and C GROUP owns. That is POWN. The percentage is then used to determine how much of the values in A, B and C should be consolidated in GROUP (the PCON value).
The example below illustrates the relationships between POWN and DOWN.
In metadata, the entity structure shows that the parent Europe has children 1A, 1B, 1C, and 1D. 1A is the holding company for Europe.
Direct Percentage of Ownership is set up using a Data Grid with the following:
– 1B is owned by 1A 80%
– 1C is owned by 1A 70%
– 1D is owned by 1B 20%
– 1D is owned by 1C 20%
Using Ownership Management, the system calculates Percent Ownership (POWN) for the children of Europe as follows:
– 1A 100%
– 1B 80%
– 1C 70%
– 1D 30%
The calculated POWN is relative to Europe. The value of 30% for entity 1D is calculated as follows:
– 1B is owned 80% and owns 20% of 1D: 80% * 20% = 16%
– 1C is owned 70% and owns 20% of 1D: 70% * 20% = 14%
– 16% + 14% = 30%
In summary, DOWN represents the percentage of direct ownership and POWN represents the ultimate percentage of ownership.
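The POWN calculation above can be sketched in a few lines. This is an illustrative model only; the data structure and function names are assumptions, not HFM internals.

```python
# Direct ownership (DOWN): child entity -> list of (owner, fraction owned).
# Values taken from the example above; 1A is the holding company.
down = {
    "1B": [("1A", 0.80)],
    "1C": [("1A", 0.70)],
    "1D": [("1B", 0.20), ("1C", 0.20)],
}

def pown(entity, holding="1A"):
    """Ultimate percentage of ownership relative to the holding company."""
    if entity == holding:
        return 1.0  # the holding company itself counts as 100% owned
    # Sum, over every direct owner, that owner's own POWN times its direct stake.
    return sum(pown(owner, holding) * share for owner, share in down.get(entity, []))

for e in ["1A", "1B", "1C", "1D"]:
    print(e, round(pown(e), 2))  # 1A 1.0, 1B 0.8, 1C 0.7, 1D 0.3
```

The recursion mirrors the worked example: 1D's 30% is 1B's path (0.80 * 0.20 = 0.16) plus 1C's path (0.70 * 0.20 = 0.14).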
Each process (program) running on the server can lock a chunk of swap space or virtual memory for its exclusive use.
If too many processes lock too much space, the next attempted lock will fail, and that process will terminate abnormally with a memory allocation error.
Because there are so many variables involved, it is difficult to calculate or predict how much swap space or virtual memory will be the maximum amount needed.
We recommend, as a reasonable starting point, that virtual memory or swap space be set to between two and three times the installed physical RAM.
For example, if 4 GB RAM is installed in the server, swap space or virtual memory should be set for a minimum of 8 GB, and 12 GB is preferred, if there is sufficient disk space available.
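The sizing rule above is simple arithmetic; a minimal sketch (the function name is an assumption):

```python
def recommended_swap_gb(ram_gb):
    """Swap/virtual memory sizing rule: 2x RAM minimum, 3x RAM preferred."""
    return 2 * ram_gb, 3 * ram_gb

# Reproducing the example above for a server with 4 GB of RAM:
minimum, preferred = recommended_swap_gb(4)
print(f"minimum {minimum} GB, preferred {preferred} GB")  # minimum 8 GB, preferred 12 GB
```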
We also recommend that resource-intensive applications, such as relational databases, be moved to separate servers and not be co-located with Essbase.
Another important memory-related metric to track is the amount of memory consumed by each ESSSVR process.
Each ESSSVR process is a running Essbase application, and on 32-bit Essbase, none can ever use more than about 2 GB of memory address space (4 GB on Solaris; 1.7 GB on HP-UX).
If an ESSSVR process tries to consume more than this amount, memory allocation errors and perhaps data corruption can occur.
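A monitoring script could compare each ESSSVR process's memory use against these approximate ceilings. A minimal sketch, using the limits quoted above; the table and function names are assumptions, and real monitoring would read the actual process sizes from the operating system:

```python
GB = 1024 ** 3

# Approximate per-process address-space ceilings on 32-bit Essbase (from the text).
ADDRESS_SPACE_LIMIT = {
    "default": 2.0 * GB,   # most platforms
    "solaris": 4.0 * GB,
    "hp-ux":   1.7 * GB,
}

def esssvr_headroom(used_bytes, platform="default"):
    """Bytes left before the process risks memory allocation errors (negative = over)."""
    limit = ADDRESS_SPACE_LIMIT.get(platform, ADDRESS_SPACE_LIMIT["default"])
    return limit - used_bytes

# e.g. an application using 1.5 GB on a default platform still has ~0.5 GB of headroom
print(esssvr_headroom(int(1.5 * GB)) / GB)
```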
Guidelines in case of memory allocation issues
Reduce the number of databases defined for the application (to one only if possible).
Within the database, reduce the amount of memory allocated for caches.
Reduce the block size (recommended 8,000 to 64,000 bytes).
Oracle BI 12c is now generally available. BI 12c is the strategic foundation of Oracle’s analytics platform, delivering significant new value and lower total cost of ownership.
BI 12c delivers a major update to the Oracle BI platform, with enhancements across the entire platform, as well as new Data Visualization capabilities. Highlights and benefits include:
Easy to upgrade: BI 12c offers a radically simple and robust upgrade from 11g, saving customers time and effort in moving to the new version. BI 12c includes a free utility to automate regression testing, the Baseline Validation Tool, which verifies data, visuals, catalog object metadata, and system-generated SQL across 11g and 12c releases.
Faster: Sophisticated in-memory processing includes BI Server optimizations and support for multiple in-memory data stores, while in-memory Essbase on Exalytics offers enhanced concurrency and scalability, as well as significant performance gains.
Friendlier: Usability updates throughout BI 12c demonstrate Oracle’s continued commitment to making analytics as fast, flexible, and friendly as they are powerful and robust. A new user interface simplifies the layout for the homepage, Answers, and Dashboards, making it easier for users to quickly see what’s important; HTML-5 graphics improve view rendering; and it’s easier for users to create new groups, calculations, and measures, for simpler, more direct interaction with results.
More Visual: A consistent set of Data Visualization capabilities is now available across Oracle BI Cloud Service and Oracle BI 12c, as well as the upcoming Oracle Data Visualization Cloud Service, offering customers a continuity of visual experience unmatched by our competitors. Business users can point and click to upload personal data and blend it with IT-managed data in BI 12c, which automatically infers connections between data sets. Visualizing data is as easy as dragging attributes onto the screen, with optimal visualizations automatically displayed – no modeling or upfront configuration required. Related data is automatically connected, so highlighting data in one visual highlights correlated data in every other visual, immediately showing patterns and revealing new insights. These insights, along with narrative comments, can be shared as interactive visual stories, enabling seamless collaboration that drives fact-based decisions.
More Advanced: Predictive analysis is more tightly integrated, enabling customers to more easily forecast future conditions based on existing data points, group elements that are statistically similar, and expose outliers. BI 12c includes the ability to run the free Oracle R distribution on BI Server, and extend existing analytics with custom R scripts, which can point to any engine (R, Oracle Database, Spark, etc.) without needing to change the BI RPD to deliver results.
More Mobile: Keyword search (“BI Ask”) empowers users to literally talk to their data, asking questions and having visualizations automatically created as responses, opening up an easy entry point for authoring. Additionally, the interface for iOS has been completely redesigned; and Mobile BI for Android offers sharing and following for nearby devices, as well as the ability to project any dashboard or story to GoogleCast-enabled devices.
Bigger Data: BI 12c enables customers to use new data, from local, Oracle, and Big Data sources, including personal files uploaded by users; direct access to data in Hyperion Financial Management and Hyperion Planning applications; and ODBC access to Cloudera Impala. Apache Spark will be accessible via native Spark SQL in an upcoming update.
Higher ROI, Lower TCO: Streamlined administration and life cycle management reduce the time and resources required to manage BI 12c, decreasing costs and increasing value for this and future releases. Enhancements include separating metadata from configuration; simpler, more robust security; easier backup, restore, and disaster recovery; hot patching of metadata; and many more.
I have been trying for a long time to integrate FDMEE and PBCS to load data as part of an automation process. Here are the complete steps.
Using FDMEE, you must register the source system from which you want to import data. For Oracle Planning and Budgeting Cloud, it is a file-based source system. FDMEE creates a file-based source system automatically when you install and configure the product. You must also register the target application (for example, Planning) to which you want to load the data from one or more source systems.
In this tutorial, we take source data from a data file that we downloaded from an Enterprise Resource Planning (ERP) source ledger and load the data to an Oracle Planning and Budgeting Cloud application.
In Oracle Planning and Budgeting Cloud Service Workspace, select Navigate > Administer > Data Management to launch FDMEE.
On the Setup tab, under Register, select Source System.
On the Data Load Mapping page, define the mappings for the Entity, Version, and Account dimensions.
In this tutorial, you define an Explicit mapping for the Entity dimension to map the source value, 1, to the target entity, Operations.
You define a Like mapping for the Version dimension to map any source value to the target version, BU Version_1.
You also define Like mappings for the Account dimension to map source values to target accounts. For example, all source account values that begin with 14 are mapped to the target account, Inventory. The source data file includes accounts 1410 and 1499 that are both mapped to the account member Inventory in the target application. When you run the data load, the source data for these two members will be summed up in the Inventory account in Oracle Planning and Budgeting Cloud.
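The mapping behavior described above can be sketched as follows. This is a simplified illustration of Explicit and Like mapping resolution, not FDMEE's actual engine; the function names are assumptions, and the amounts (500 and 250) are hypothetical values for accounts 1410 and 1499.

```python
def map_entity(src):
    explicit = {"1": "Operations"}       # Explicit mapping for the Entity dimension
    return explicit[src]

def map_version(src):
    return "BU Version_1"                # Like mapping: any source value -> BU Version_1

def map_account(src):
    if src.startswith("14"):             # Like mapping: 14* -> Inventory
        return "Inventory"
    return src

# Source rows (account, amount) from the trial-balance file; amounts are hypothetical.
rows = [("1410", 500.0), ("1499", 250.0)]

totals = {}
for account, amount in rows:
    target = map_account(account)
    totals[target] = totals.get(target, 0.0) + amount  # 1410 and 1499 sum into Inventory

print(totals)  # {'Inventory': 750.0}
```

This shows why the two source accounts end up as a single summed value in the target Inventory member.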
Your next step in FDMEE is to create a data load rule for a specific location and category to define what you want to import from the source system. For example, the data load rule defines the file(s) that you want to import in addition to other details. After creating the data load rule, you execute it to import the data for one or more periods. You then verify that the data was imported and transformed correctly, and then export the data to the target application.
Perform the following steps to define and execute the data load rule, and then verify that the data was imported and transformed:
On the Workflow tab, under Data Load, select Data Load Rule.
The Data Load Rule page is displayed with the current POV. In this example, the POV is set to the Operations location, Aug-13 period, and Current category.
Click Add to add a data load rule. In the Details grid, specify the data load rule name, OperationsDLR, and select the data load file name. In this example, you load the file TrialBalAug2013.csv.
Note: If the file information is left blank, the user will be prompted to select a file at run time.
With the rule selected, click Execute. In the Execute Rule dialog box, you define the options for running the rule. In this example, you specify Import from Source, select Aug-13 as the starting and ending period, select the Replace import mode, and click Run.
You check only the box labeled “Import from Source” so that you can look at the data and the mapped results in the workbench before loading the data to Oracle Planning and Budgeting Cloud. After everything is confirmed, additional data imports can be loaded to the workbench and exported to Oracle Planning and Budgeting Cloud in the same step.
Note: When you import source data, FDMEE imports and also transforms (validates) the data by using the data load mappings that you defined.
An Information dialog box displays your process ID.
Note: To verify the status of your process, you can navigate to Process Details. A green check mark indicates success for your process ID.
Under Data Load, select Data Load Workbench to see the results of the data load and the mapping process in Data Load Workbench.
After a successful load, click Export to load the data to the target application. Select the desired load options, and then click OK.
When complete, the icon will change to indicate a successful load.
Let’s now look at the data that you loaded to the target application. In Oracle Planning and Budgeting Cloud, we open a form to verify that the data was loaded, and then we confirm that we can drill through from the Oracle Planning and Budgeting Cloud form to the source system.
In Oracle Planning and Budgeting Cloud, perform the following steps to verify the data load and the drill through:
Open a form in the Oracle Planning and Budgeting Cloud application that contains the accounts for which you loaded data. The data form displays data for the Operations entity, Aug period, FY13 year, Current scenario, BU Version_1 version, and USD currency. The form displays the loaded data.
Right-click on the Inventory data cell, and select Drill Through.
Then click Drill Through to source to start the drill-through process.
The first step in the drill-through process displays the source rows that comprise the Inventory amount in Oracle Planning and Budgeting Cloud. Note that two rows (1410 and 1499) from the source data-load file were loaded to the same account in Oracle Planning and Budgeting Cloud.
From here you can click an amount to display the source data-load file or drill through to the source system.
In the Drill Through Details grid, click the amount for the 1499 row, and select Drill Through to Source to drill through to the source system; in this case, Oracle E-Business Suite. Because you included the CCID in the data-load file, you can drill through to the original data in Oracle E-Business Suite.
The first time you access this source system, you are prompted to enter a user name and password.
You are then taken to the ERP data source, with the respective code combinations displayed.
Essbase normally maintains correct and synchronized data and index files. Trouble usually arises when Essbase applications are abnormally terminated. When an abnormal termination occurs at the “wrong” point during processing, data or index files may not be written properly to disk, resulting in database corruption. This can happen for various reasons:
– a power failure or the server Machine is turned off abruptly
– the server operating system fails or is shut down
– an operator kills an Essbase task on the server
– some outside influence causes a fatal interruption of application processing
– a user attempts to access data within a previously corrupted database
– a bug in Essbase causes the application to crash
Essbase databases contain data files (PAG) and index files (IND). The data files store blocks of data values or cells. Index files identify where in the data files each block of data resides so that during data retrieval, load, or calculation, Essbase can extract the values for processing. Because these files are stored separately, they must remain completely synchronized, or else the index would not reference the correct data blocks.
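A toy model makes the synchronization requirement concrete. This is not the actual PAG/IND format (which is proprietary); it is only a sketch of why a stale index silently returns the wrong data:

```python
# A list stands in for the .PAG data file; a dict stands in for the .IND index,
# mapping each block's cell coordinates to its position in the data file.
data_file = ["block-A", "block-B", "block-C"]
index = {"Sales,Jan": 0, "Sales,Feb": 1, "COGS,Jan": 2}

def read_block(cell):
    # The index is the only map into the data file; there is no other way
    # to interpret the data file's contents.
    return data_file[index[cell]]

print(read_block("Sales,Feb"))  # block-B

# If the data file changes without the index being rewritten, lookups silently
# return the wrong block -- the essence of an unsynchronized (corrupt) database.
data_file.insert(0, "stale")
print(read_block("Sales,Feb"))  # block-A: wrong data, and no error is raised
```

Note that the second lookup fails silently, which matches the observation below that users may work with a corrupted database for some time without noticing.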
Under normal circumstances, Essbase ensures these files are correct. Essbase employs a number of techniques internally to minimize the potential for corruption, including a “safe-write” methodology which uses space in the files to store both current and previous copies of changed data. In the event of an abnormal termination, Essbase can usually recover to a prior, known good point in time. Unfortunately, sometimes events occur which leave the files in an unsynchronized state.
If the data and index files are not synchronized, the database is corrupt. The index is the only means for interpreting the data files, so if the index is bad, the data is lost. There is no other utility which can read the data files and recover the data. This underscores the importance of having a good backup strategy.
Depending on the extent of the corruption, some portions of the database may still be accessible. This can be beneficial in recovery efforts, but it also means that users may be working with a corrupted database and not know it. Some time may pass before someone accesses the “bad” spot in the database and discovers the corruption. When this happens, the Essbase application process on the server will usually “crash” and produce an exception log (XCP file). The user will get a message indicating a network error because the server is no longer responding to the client.
In some cases, no one notices these crash events. The users simply reconnect, the database is reloaded, and they may continue in this fashion for a long time. The problem compounds, and sometimes the database degrades to the point where it can no longer even start up. At this point, the only recourse is to recover the files from a file system backup.
Here is a sampling of error messages which may indicate a corrupted Essbase database (other, similarly worded messages may also occur):
1006010 Invalid block header: Block’s numbers do not match
1006016 Invalid block header: Illegal block type
1070020 Out of disk space. Cannot create a new index file.
1070026 Corrupted Node Page in the B+tree. [%s] aborted
1005016 Unable to Read Information From Page File
1006002 Unable to Store Information In Page File
1006004 Unable to Read Information From Page File
1080009 Fatal Error Encountered. Please Restart the Server…
1070018 Invalid file handle [%s]. [%s] aborted.
Refer to the URL below from Oracle for the fix and possible root cause.
Unable to delete or update a partition using either the Essbase Administration Services Console or MaxL.
Getting error: “1241128: Partition does not exist”
This happens with either BSO or ASO partitions.
The issue occurs when using EAS 11.1.x and higher.
This is expected behaviour from EAS
Version 11.1.x of EAS and later now requires the port number when connecting to the Essbase server and creating partitions.
By default, if you are running Essbase on the default port (1423), the port number is not added to the Essbase Server name when you add the server in the EAS console.
In version 11.1.x and upwards you must now also specify the port number as well as the Essbase Server name when adding the server to the EAS Console.
To resolve the issue:
1. Open the EAS Console
2. Right click on Essbase Servers and select ‘Add Essbase Server’
3. Add the Essbase Server in the format: ServerName:PortNumber (for example Localhost:1423)
4. Test to make sure you can connect and can see your applications.
5. Once added you may remove the old Essbase Server by right clicking on it and selecting ‘Remove Essbase Server’
You should now be able to update and delete your partitions.
A newly created partition in Essbase Administration Services (EAS) Console shows up as an Orphaned Partition after upgrading to Essbase v9.3.3 or Essbase v11.1.x or any other version.
The possible cause: when a port number is not specified, the code internally adds the Essbase default port “1423” for handling partitions. This has been verified as unpublished Bug 10178179 – “Partitions are orphaned or repaired if port is not included in the server name”.
As a workaround, in the EAS Console:
1. Drop the created partition.
2. Right Click on Servers Node.
3. Select “Add Essbase Server”.
4. Add Essbase Server Name including Essbase Port Number, e.g. EssbaseServerName:1423
5. Create the partition, and use the hostname as created in step 4 for the source and target hostnames in the partition manager.
The Essbase server name can be either the short name or the fully qualified domain name.