Dynamic Integrations for Multiple Hyperion Planning Applications

The Journey to Dynamic Hyperion Planning ODI Integration

Increasingly, organizations are investing in a global EPM environment that centralizes all information and gives users more analytic power. The growing need for quick access to information drove Dell to create a new EPM environment with a faster and more reliable infrastructure, and with fewer maintenance costs.

The project objective was to replace the old worldwide forecast application with one that better reflects the new direction of the enterprise and accommodates existing regional applications. Analyzing the impact that this replacement would cause in the old ODI interfaces, Dell realized that the changes needed to accommodate the new application were so extensive that creating a multiple planning application development structure was justified.

The main challenge was creating a merged application that was connected with the regional applications. The old applications were split (one per region), so the key to project success was a metadata load process responsible for orchestrating all the applications.

This project also showed us how rigid and fragile the default Hyperion Planning metadata load process in ODI is when it comes to maintenance changes and new application development. A large company cannot rely on such a rigid structure, which does not allow for rapid changes in direction or new information needs. This scenario drove us not only to create new ODI interfaces to maintain the new application, but also to create an entirely new structure that is fast, flexible, reliable and dynamic enough to support any number of new applications and changes—with minimal development cost and time.

To better understand this new structure, we first need to see how the default ODI development model works behind the scenes.

Default Hyperion Planning Metadata Load Using ODI: The Beginning

Oracle Hyperion Planning is a centralized Excel- and Web-based planning, budgeting and forecasting solution that integrates financial and operational planning processes and improves business predictability. A Hyperion Planning application is based on “dimensions”—basically, the data categories used to organize business data for retrieval and preservation of values. Dimensions usually contain hierarchies of related members grouped within them. For example, a Year dimension often includes members for each time period, such as quarters and months.

In a Planning application, “metadata” means all the members in a dimension and all its properties. These properties must be created manually or loaded from external sources using a metadata load method; the best way to load metadata into Planning is to use ODI.

ODI is a fully unified solution for building, deploying, and managing real-time data-centric architectures in a service-oriented architecture (SOA), Business Intelligence (BI), and data warehouse environment. It combines all the elements of data integration, real-time data movement, transformation, synchronization, data quality, data management, and data services to ensure that information is timely, accurate, and consistent across complex systems.

ODI uses models and data stores to get descriptive information about the source and target systems and their contents. In ODI, a “model” is a relational data model that represents the structure of a number of interconnected data stores in a single schema on a particular technology.

Models and data stores normally help speed up development time, unless the target technology is Hyperion Planning, and especially when metadata loads are involved. Metadata is often described as “information about data.” More precisely, metadata is the description of the data itself: its purpose, how it is used, and how the systems manage it.

Metadata load for Hyperion Planning using ODI can be simple if we are dealing with only one application and a few dimensions. But it becomes extremely complex and hard to maintain if we have an environment with multiple Hyperion Planning applications with several different dimensions in each application. To exemplify this complexity, let’s begin with the default metadata load process in ODI for one application that has eight dimensions and two attribute dimensions:

Figure 1 shows us how to configure a classic Hyperion Planning metadata load using ODI.

  • Step 1: Set up the Hyperion Planning connection in ODI using a logical and a physical schema.
    • Physical schemas: Objects created in ODI that contain the connection information of a given data server. This information includes connection user/password, Java driver, database URL, etc.
    • Logical schemas: Objects created in ODI that logically represent one or more physical schema. In ODI development, all references to a database object are done through a logical schema, not a physical schema. This distinction between logical and physical schemas is used in ODI to differentiate what a database means in a business perspective (logical) and in a technical perspective (physical). It also allows us to use the same logical schema with different physical schemas, depending on the execution context. “Context” in ODI is the object that binds one logical schema to one physical schema.
  • Step 2: Create the data store objects. In ODI, a data store object represents one database object in a given technology. When we work with Oracle it means a table; in Hyperion Planning, it represents a dimension or an attribute. A dimension/attribute data store object in ODI contains all metadata information that must be loaded into Hyperion Planning, such as member/parent name, consolidation type, aggregation, plan type, etc.
  • Step 3: Create an interface object. An interface is used to build the mapping between the source data store and the target data store. It has all the information and configurations that are needed to perform our metadata load, including joins, filters and data transformation.

Now the interface is ready to load metadata to Planning dimensions. But what does ODI do behind the scenes? This knowledge is key to understanding the concept of dynamic development; Figure 2 depicts this process.

To perform any task in ODI, we need to provide a proper Knowledge Module (KM). A KM is a template containing the code necessary to implement a particular data integration task in a particular technology. These tasks include loading data, checking it for errors, or setting up the triggers necessary to implement journalization. All KMs basically work the same way: ODI uses them to generate code, which is then executed by a technology at run time.

In our case, ODI uses a Planning Integration KM to translate the data store information; it uses ODI API commands to generate a SQL query that will be executed in a Java procedure to load the metadata to Hyperion Planning. This might seem complicated, but actually it’s pretty simple.

A data store contains all necessary information to load source data to a target system. In our case, this source data is an Oracle table with metadata information about account dimension: member name, parent name, alias, data storage, etc. On the target side is the Planning data store that represents the account dimension and contains all information needed to describe a member and its functionality inside the application: two pass calculation, plan type, aggregation, etc.

In any Planning data store, the Operation column is important; it accepts four different values:

  • Update: Adds, updates or moves the member
  • Delete Descendants: Deletes all descendants of the member, but not the member itself
  • Delete IDescendants: Deletes the member and all of its descendants
  • Delete Level 0 Members: Deletes all leaf descendants of that particular member

The Planning Integration KM contains a set of steps that perform different tasks needed to load metadata into planning, such as getting connection information, reporting load statistics and, most importantly: “Load data into planning.” This step generates the mapping code between the source and target data stores using ODI API commands (e.g., “odiRef.getColList” returns all mapped columns from the target data store, and “odiRef.getFrom” returns the source table name).
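As an illustration, the template for such a step might look like the sketch below. This is a simplified, hypothetical excerpt, not the actual KM code: the odiRef calls are real ODI substitution API methods, but the exact pattern strings vary between KM versions.

```sql
-- Hypothetical "Load data into planning" template excerpt. At run time,
-- ODI expands the <%= ... %> tags against the interface's data stores:
select <%=odiRef.getColList("", "[EXPRESSION] \"[COL_NAME]\"", ",\n      ", "", "")%>
from   <%=odiRef.getFrom()%>         -- resolves to the source table name
where  (1=1) <%=odiRef.getFilter()%> -- any filters defined in the interface
```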

Together, these commands generate a SQL query that retrieves data from the source tables in the correct format that Planning understands. With this information, ODI gathers the data and uses Java to connect to Planning in order to load, move or delete metadata in the Account dimension.

What if we want to maintain a second dimension, such as Entity? The KM would be the same, the source could be the same table (with different filters, for example) or a completely new source table, but the target data store would definitely be different; because each dimension is a separate data store in ODI, our only option is to create a new interface to load any other dimension.

Figure 3 shows what happens in a normal development environment, where we end up with several interfaces per application because of the dependence between interface and dimension. There will also be cases where more than one interface is needed per dimension. If we want to do something more complex—like move an attribute from one parent to another, load the members in the right order, or move shared members instead of creating new shared members—we will end up with two or even three interfaces per dimension. Let’s do the math: an environment with 5 Hyperion Planning applications and an average of 15 interfaces per app gives us something like 75 interfaces to code and maintain. That’s a lot of interfaces, which means more development time and higher maintenance costs. This was the biggest motivation for us to conceive and develop dynamic integrations for Hyperion Planning applications.


We Have a Problem. Now What?


Developing a large number of interfaces takes a lot of time and consumes a lot of human resources—and it only gets worse when we talk about code maintenance. If something that impacts all interfaces changes, we need to replicate the fix to every interface, retest all the logic and deploy it in every environment. This process can be very time-consuming and error-prone. Table 1, below, illustrates the difficulties encountered in a classic development and the actions we need to take to build a smarter, more flexible solution.


Difficulties in a classic Hyperion/ODI development, and the actions to achieve a smart/flexible solution:

  • Difficulty: Metadata load to Hyperion Planning is limited to the creation of one interface per dimension/application, generating a large volume of interfaces.
    Action: Create one generic process, with the smallest possible number of interfaces, that can be used for any number of applications and dimensions.
  • Difficulty: ODI data stores are tied to only one dimension and one application.
    Action: Replace ODI data stores with a more flexible solution that is independent of the application/dimension being loaded.
  • Difficulty: ODI KMs work with only one application at a time and depend on the target data store to know which dimension is being loaded.
    Action: Upgrade the ODI KMs to use dynamic target application and dimension data stores.
  • Difficulty: Metadata information generally comes from multiple sources with different data formats.
    Action: Create a standard generic inbound table for metadata, with all columns necessary for loading metadata to Hyperion Planning and with correctly formatted data.
  • Difficulty: Each operation on a member in Hyperion Planning beyond a simple update (e.g., moving or deleting it) requires creating a separate interface.
    Action: Create generic components to handle the different metadata situations (e.g., attributes changing parents, shared member loads, loading members in the correct order, etc.).
  • Difficulty: Full source tables are read during every metadata load, which causes poor performance and may lead to a shared member creation instead of a shared member movement.
    Action: Load only the metadata that has changed, without impacting any hierarchy order or behavior.

Table 1: Problem Solution Table

As we can see, there are a lot of interesting and difficult points to be covered in a generic metadata load process, but each of those points has a solution. Assembling all those ideas gives us a smart and flexible process that is independent of the number of applications/dimensions.

To achieve our goal, we will need:

  • A standard metadata inbound table that can be used to load any dimension and application independently
  • A similar table to extract all information in a Hyperion Planning application
  • A third table to compare our inbound and extract metadata tables to create a delta between them containing only the members that must be loaded, increasing the load performance
  • A smart load process that understands what was changed and executes all metadata situations (e.g., delete, move or update)
  • A load component that dynamically builds its own data store information that will be used in ODI to load any application/dimension

Believe it or not, this process exists, and each part of it will be explained in detail below. It all starts with having the right table structure….

Preparing to Load: Gathering the Data

First things first: we need a smart process that classifies the metadata before loading it into Planning. For this, we must sort the metadata into the various possible categories before the load phase, creating a delta between the data coming from the various systems of record and the Planning application itself. This delta is known as the “metadata tie out process,” and it requires easy access to both the new source metadata and the existing target metadata.

Inbound Process

To load any metadata into Hyperion Planning, ODI needs a set of dimension-specific information that describes how each member will behave inside the application. For example, we need to set up a “Time Balance” behavior to load an account member, and when we load a dimension that has an attribute, the attribute value must be loaded together with the dimension member. Each dimension has its own particularities, which is why ODI needs one data store per Planning dimension/application: the columns in each data store are different. The source tables for each dimension will also probably be different, making it impossible for a generic load process to be aware of all possible inbound table sources and columns.

To standardize our metadata load process, we need a standard inbound table (as shown in Table 2) that can be used to load any dimension and application. The inbound table is a merge of all possible columns necessary to load all Planning dimensions of all existing Planning applications in our environment. It also contains some extra columns that are used throughout the generic metadata load process:

  • APP_NAME: identifies to which app that member belongs, and allows multiple applications to be loaded at the same time by the same generic process
  • HIER_NAME: indicates to which dimension that member belongs, allowing multiple dimensions in one single inbound table
  • DIM_TYPE: contains the dimension type, like ENTITY, ACCOUNT, and ATTRIBUTE DIMENSION, which is used in the generic process to decide what to do depending on the dimension type
  • POSITION: identifies the order in which that member should be loaded into the dimension hierarchy
  • GENERATION: contains the generation number of that member inside the dimension hierarchy
  • HAS_CHILDREN: indicates if that member has children or not

Having this table means that we create a unique data layer where all source metadata will reside in a correct pattern that Planning understands, and that the generic process reads from one standard table, not from a lot of different places.
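A minimal sketch of such an inbound table is shown below. The table and column names are illustrative (only the extra columns described above come from the process itself), and a real table would add one column per Planning member property (Data Storage, Time Balance, aggregations, attribute values, etc.).

```sql
CREATE TABLE INBOUND_GENERIC (
    APP_NAME      VARCHAR2(80),  -- target Planning application
    HIER_NAME     VARCHAR2(80),  -- target dimension
    DIM_TYPE      VARCHAR2(30),  -- ENTITY, ACCOUNT, ATTRIBUTE DIMENSION, ...
    POSITION      NUMBER,        -- load order inside the hierarchy
    GENERATION    NUMBER,        -- generation of the member
    HAS_CHILDREN  CHAR(1),       -- Y/N
    PARENT        VARCHAR2(80),
    MEMBER        VARCHAR2(80),
    ALIAS_DEFAULT VARCHAR2(80),
    OPERATION     VARCHAR2(30)   -- Update, Delete Descendants, ...
    -- plus one column per remaining Planning property
);
```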

This metadata source table can be loaded from any number of different sources, such as Oracle tables, flat files, Oracle’s Data Relationship Manager (DRM), etc. Each of these processes—which are external to the generic metadata load process—is responsible for loading all necessary information about each member in the correct Planning format. Each external process is unique and may contain its own business logic; the important thing is that all of them end up populating the necessary metadata information in one generic inbound table that is used by the generic load process. Having a central, unique table for metadata also centralizes all data quality verification over the metadata itself, ensuring that only valid information goes to Planning.

Extract Process

Now that we have a standard inbound table, we need a standard extract table that will contain all metadata that already exists in the Hyperion Planning applications; it will be compared with our inbound table to create the delta for the tie out process. This table has the same structure as the inbound table; the only difference is the data it stores. To populate our extract table we need to extract the existing metadata that resides in all Hyperion Planning applications—but there’s a problem: ODI doesn’t have a proper KM to extract metadata from Planning. To solve this issue, a SQL query is used to extract the dimension information from the Planning application repository itself. We could use an alternative—the ODI KM that extracts metadata from Essbase—but it takes much more time, and we would also need one interface per dimension.

To extract metadata from Planning we need some SQL and Planning application repository knowledge. Before we can create a new Planning application, we need a database to store all the Planning repository tables. For all the SQL work described in this article, we will need an Oracle 11g database (because we use some features available only in this version of the database). If the Planning application is not stored in an Oracle 11g database, it is still possible to use what is described here, but it will be necessary to first copy the data to temporary tables in an Oracle 11g database, or to adapt the code for a different version of Oracle or another type of database.

A Planning application repository has a central table that stores every object that you create in the application. This table, named HSP_OBJECT, is the key source for extracting metadata from the applications. This table provides all information needed for the extract process and we can easily extract any necessary dimension information. In this table we have six important columns:

  • OBJECT_NAME: stores the object name
  • OBJECT_TYPE: stores the type of the member (Shared Member, Version, Entity, Time Period, Cube, Currency, Member, Planning Unit, Year, Attribute Dimension, Dimension, Alias, Folder, Form, Attribute Member, Group, User, Account, Calendar, Scenario, FX Table and User Defined Dimension Member)
  • OBJECT_ID: the ID of the object
  • PARENT_ID: the ID of the object’s parent
  • GENERATION: the generation to which the object belongs
  • HAS_CHILDREN: whether that object has children

Notice that this table is based on a parent/child relationship. This type of relation is perfect when we want to store a hierarchy inside a table: we don’t need to know how many generations a hierarchy will have; we only need two columns, with the parent and child IDs, to rebuild the correlation. To achieve this, the Oracle database gives us a very useful clause: Connect by Prior … Start With. This clause selects data that has a hierarchical relationship; the Prior keyword creates the recursive condition. In short, here’s what we need to do:
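A sketch of such a hierarchical query against HSP_OBJECT follows; the column names come from the repository description above, while ORDER SIBLINGS BY on a POSITION column is an assumption worth verifying against your repository version.

```sql
SELECT OBJECT_ID,
       PARENT_ID,
       OBJECT_NAME,
       GENERATION,
       HAS_CHILDREN
  FROM HSP_OBJECT
CONNECT BY PRIOR OBJECT_ID = PARENT_ID  -- walk from each parent to its children
  START WITH OBJECT_NAME = 'Account'    -- root of the hierarchy to rebuild
 ORDER SIBLINGS BY POSITION             -- keep the outline order of siblings
```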

This command will retrieve the Account dimension hierarchy. If we need to rebuild the metadata from another dimension, the only thing we need to do is change the OBJECT_NAME to the desired dimension name (e.g., Entity). This query will be the core of our extract process. Now we have the hierarchy built, but this is not enough—we must have all information about the members in order to compare them with our inbound table, so we also need to take the other metadata information from the Planning repository. Basically, we have to join all of the following tables together to get everything we need. Table 3 maps the tables and the information you can find in the application repository.


Repository Table Name → Extract Table Information

  • HSP_MEMBER: Data Storage, Data Type, Two Pass Calculation, Aggregation (Plan1, Plan2, Plan3, Wrkforce, Capex)
  • HSP_ALIAS: Alias: Default
  • HSP_STRINGS: Description
  • HSP_ACCOUNT: Account Type, Time Balance, Skip Value, Exchange Rate Type, Variance Reporting
  • HSP_PLAN_TYPE: Plan Type (Plan1, Plan2, Plan3, Wrkforce, Capex), Source Plan Type
  • HSP_ENTITY: Base Currency
  • HSP_MEMBER_TO_ATTRIBUTE: Associated Attributes

Table 3: Planning Extract Table Mapping

Important: When you build the final extraction query from the Planning repository:

  1. Remember that the HSP_OBJECT table contains the metadata information for every object in Planning, so we need to join almost all of the above tables back to HSP_OBJECT again to get the name of each referenced object. For example, HSP_ALIAS contains only the IDs of the alias; the alias name itself is stored in HSP_OBJECT.
  2. Remember to use LEFT JOINs from HSP_OBJECT to all those tables. Depending on the dimension type, some tables will have nothing stored for a member; for example, Account members do not have data stored in the HSP_ENTITY table.
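The two points above can be sketched as follows. This is a hypothetical fragment: the join keys and property column names are indicative and may differ between Planning releases.

```sql
SELECT O.OBJECT_NAME  AS MEMBER_NAME,
       P.OBJECT_NAME  AS PARENT_NAME,
       OA.OBJECT_NAME AS ALIAS_DEFAULT,  -- alias name resolved via HSP_OBJECT
       M.DATA_STORAGE,                   -- assumed column in HSP_MEMBER
       A.ACCOUNT_TYPE                    -- assumed column in HSP_ACCOUNT
  FROM HSP_OBJECT O
  JOIN HSP_OBJECT P  ON P.OBJECT_ID = O.PARENT_ID       -- self-join: parent name
  LEFT JOIN HSP_MEMBER M  ON M.MEMBER_ID  = O.OBJECT_ID
  LEFT JOIN HSP_ALIAS  AL ON AL.MEMBER_ID = O.OBJECT_ID
  LEFT JOIN HSP_OBJECT OA ON OA.OBJECT_ID = AL.ALIAS_ID -- self-join: alias name
  LEFT JOIN HSP_ACCOUNT A ON A.ACCOUNT_ID = O.OBJECT_ID -- empty unless Account
```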

With this query ready, all we need to do is loop it, passing the dimension name to the core query mentioned above; all Planning dimensions will be extracted. Normally, we learn to loop in ODI using a count variable and a check variable that determines if the loop got to the end of the cycle. Then a procedure or a package is called over each loop interaction. There is nothing wrong with this approach, but it generates more variables and a bigger flow inside the ODI package.

Thankfully, we have a much easier way to create loops in ODI: “Command on Source” and “Command on Target.” This enables us to execute a command in the target tab based on the command on source; in other words, the command in the target will be executed for each row that returns from the query in the source tab. Basically, the source query will be a cursor and the target query will be the “Loop” clause in an analogy to PL/SQL. We can also pass information that returns in the source tab query to the target tab command, enabling us to change the content that will be executed in the target dynamically.

With this concept, we can create much simpler loops. We can add the query that will return all dimensions that we want to loop in the “Command on Source.” We can get this information easily in the application repository itself.

This query will return all the dimensions that exist in one planning application, and the only thing left is to insert in the “Command on Target” the query to extract the data from the Planning application and then pass the list of dimensions from the “Command on Source” tab. To do this, we use the column name or the alias created in the source query as an ODI variable to the target query:
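For example, the pair of commands might look like the sketch below. The OBJECT_TYPE codes and the EXTRACT_GENERIC table are assumptions; ODI lets the target command reference a source column alias through the #ALIAS substitution or :ALIAS bind syntax.

```sql
-- "Command on Source": returns one row per dimension of the application
SELECT OBJECT_NAME AS DIM_NAME
  FROM HSP_OBJECT
 WHERE OBJECT_TYPE IN (2, 50)  -- assumed codes: Dimension / Attribute Dimension

-- "Command on Target": executed once per row returned above
INSERT INTO EXTRACT_GENERIC (HIER_NAME, MEMBER_NAME, PARENT_ID, GENERATION)
SELECT '#DIM_NAME', OBJECT_NAME, PARENT_ID, GENERATION
  FROM HSP_OBJECT
CONNECT BY PRIOR OBJECT_ID = PARENT_ID
  START WITH OBJECT_NAME = '#DIM_NAME'
```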

This will repeat for each row returned from the source query, allowing us to extract all metadata information from all dimensions. The Command on Target query in Figure 6 shows us an example of how to get HSP_OBJECT information. To get the entire list of needed information, we use a query that joins all tables described in Table 3. It is also worth mentioning that this loop method works for every kind of looping in ODI, minimizing the number of created ODI objects.

What a formidable deed we have done: extracting, with only two queries, all the metadata from a Planning application. But this is not enough. As the title of this article suggests, we must do that for multiple applications, and for that we only need to apply the same loop approach again for each existing application.

Since each Planning application has its own repository, we need to grant “Select” access on each application repository to the ODI user that connects to the Oracle database, so that we get maximum code reuse. With the ODI user having access to all Planning application repository tables, all we need to do to extract all dimensions from all Planning applications is:

  • Encapsulate the procedure created to extract the application dimension metadata in an ODI scenario
  • Create another procedure to loop the above scenario, passing the application name and the application schema in the Oracle database

How does it work? The procedure will be set with a query in the “Command on Source” tab that will return all the application names that we need to loop, and all the schema names for each application. This can be achieved by populating a parameter table with the name and the schema for that application or we can use the Planning repository to get this information.

The Planning system repository has some tables that store information regarding its applications, such as configurations, data source information, application names and database schemas. In the HSPSYS_DATASOURCE table we find all the information regarding the data source created to build each Planning application, including the database schema used to set up the application repository; in the HSPSYS_APPLICATION table we find the names of the Planning applications. When we join those tables, we get all the application names and database schemas that exist in the Planning environment.
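A sketch of that join is below; the column names and the join key are assumptions to verify against your Planning system repository.

```sql
SELECT A.APP_NAME,      -- assumed column: the Planning application name
       D.DB_SCHEMA      -- assumed column: schema holding the app repository
  FROM HSPSYS_APPLICATION A
  JOIN HSPSYS_DATASOURCE  D
    ON D.DATASOURCE_ID = A.DATASOURCE_ID  -- assumed join key
```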

The only thing missing is to set the “Command on Target” tab with the “OdiStartScen” command. This ODI command is used to call an ODI scenario; if we use it with the loop approach we can call that scenario as many times as we need and the only code change in the Figure 6 queries that we need to do is as below:
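For illustration, the “Command on Target” call might look like the sketch below. The scenario name, project code and variable names are hypothetical; OdiStartScen with -SCEN_NAME, -SCEN_VERSION, -SYNC_MODE and the "-PROJECT.VARIABLE=value" form is standard ODI tool syntax.

```
OdiStartScen "-SCEN_NAME=EXTRACT_PLANNING_APP" "-SCEN_VERSION=-1" "-SYNC_MODE=1"
             "-DEMO.APP_NAME=#APP_NAME" "-DEMO.APP_SCHEMA=#APP_SCHEMA"
```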

Figure 8 shows us how to extract from all Hyperion Planning applications and all existing dimensions. This flow works as follows:

  • The Main Scenario executes the first “Command on Source” query that will return all Planning application names that exist in the environment together with database schemas
  • For each line returned from the “Command on Source,” an ODI scenario will be called, passing as parameters the application name and the database schema to the Extract Scenario
  • Inside the Extract Scenario, it will execute the “Command on Source” query to get all existing dimensions from the input planning application/schema
  • For each dimension returned from the “Command on Source,” an extraction query will be executed, retrieving all the necessary information to load the extract tie out table

At the end of the process, we will have loaded the extract table with all existing metadata from all Planning applications and dimensions. This table will be used in the next step, where we will compare each metadata member against the source metadata and decide what to do with it.

Benefits of the Metadata Tie Out Process

Now that we have inbound and extract tables with all metadata from the source and target systems, we need to compare them and decide what to do with each metadata member. For this tie out process, we created the metadata tie out table; it merges the inbound and extract tables, containing all source and target columns (with a prefix identifying each) plus a column called CONDITION. This extra column describes what the metadata load process should do with each particular member. It is important for this table to have all source and target columns, because then we can actually see what has changed from source to target metadata for each member.

The metadata tie out process will read both source and extract tables and populate the metadata tie out table with all source, extract and CONDITION information. The tie out process has built-in intelligence that analyzes several different load situations for each member and populates the final result in the CONDITION column. The process always searches for a parent/member/application/dimension combination in the source table and matches it to the parent/member/application/dimension on the target table. The process uses this combination because this is the information that represents a unique member in Planning.

Here are the possible CONDITION statuses created by the tie out process:

CONDITION status → When it happens

  • Match: All metadata information from the inbound source table is equal to the extract table information. No further action is needed.
  • No Match: Any column from the inbound source table is not equal to the extract table information. The member must be updated in the target Planning application.
  • Exists only in Source: A new member exists only in the inbound source metadata table. It must be loaded to the Planning application.
  • Exists only in Application: A member was deleted on the source system but remains in the Planning application. The process creates a “Deleted Hierarchy” member and moves the deleted members under it; it does not physically delete the member, so the data associated with it stays intact.
  • Moved Member: A member moved from one parent to another. It must be updated in the Planning application.
  • Changed Attribute Member: An attribute member moved from its parent to another parent.
  • Reorder Sibling Members: A new member must be inserted where another member previously belonged, or a member has changed place order with one of its siblings.
  • Deleted Shared Members: A shared member no longer exists in the inbound table and must be deleted from the Planning application.

Table 4: Condition Types

The first four conditions are achieved by a “Full Outer Join” between the inbound and the extract tables and a “Case When” to define the CONDITION column, as demonstrated in Figure 9, below:
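A simplified sketch of that query follows; the column lists are abridged, the table and column names are illustrative, and in practice every compared column needs NVL handling so that nulls compare correctly.

```sql
INSERT INTO METADATA_TIE_OUT
       (APP_NAME, HIER_NAME, SRC_MEMBER, TGT_MEMBER, CONDITION)
SELECT NVL(I.APP_NAME, E.APP_NAME),
       NVL(I.HIER_NAME, E.HIER_NAME),
       I.MEMBER,
       E.MEMBER,
       CASE
         WHEN E.MEMBER IS NULL THEN 'Exists only in Source'
         WHEN I.MEMBER IS NULL THEN 'Exists only in Application'
         WHEN NVL(I.ALIAS_DEFAULT, '#') = NVL(E.ALIAS_DEFAULT, '#')
          AND NVL(I.DATA_STORAGE,  '#') = NVL(E.DATA_STORAGE,  '#')
              -- ... repeat for every remaining metadata column ...
         THEN 'Match'
         ELSE 'No Match'
       END
  FROM INBOUND_GENERIC I
  FULL OUTER JOIN EXTRACT_GENERIC E
    ON  I.APP_NAME  = E.APP_NAME
   AND  I.HIER_NAME = E.HIER_NAME
   AND  I.PARENT    = E.PARENT
   AND  I.MEMBER    = E.MEMBER
```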

This query compares all metadata columns in the source and extract tables to see what has changed, and adds to the CONDITION column what the load process should do with that row afterwards. For the other four conditions, we need to work with the data just created by the Figure 9 query.

  • Moved Member: When we execute the query from Figure 9, we get an unexpected behavior regarding moved members. A moved member is a member that changed from one parent to another. Since the query compares the member and parent names to decide whether the member is new, modified or deleted, it considers the source member a new member (because it has a new parent) and the extracted member a deleted member (because its parent/member combination does not exist in the source)—thus generating two rows in the tie out table instead of one. To solve this issue, the tie out process merges those two rows into a single one. This merge happens for every pair of rows that share the same member name, where one has the “Exists only in Source” condition and the other has the “Exists only in Application” condition.
  • Changed Attribute Member: Attribute members require a special logic because Hyperion Planning treats them differently. When you want to move an attribute member from one parent to another, you first need to delete the member and then insert it back in the new parent. So this is a two-step operation, instead of the normal “move member” operation. When the process deletes the attribute, Hyperion Planning automatically removes its value from its associated dimension member. If we don’t load the associated dimension members again, their attribute values will be missing at the end of the metadata load process. To solve this issue, the metadata tie out process searches for every dimension member that has a moved attribute associated with it and changes its condition to NO_MATCH. This will guarantee that, after moving the attribute to a new parent, the process also loads all the dimension members again with its attribute values. Another particularity with attributes is that if an attribute no longer exists in the source system, it is deleted from the planning application. It is not moved to a deleted hierarchy because no data is associated directly with the attribute member, and thus no data is lost.
  • Reorder Sibling Members: When a single member is added to an existing parent member and this parent member has other child members, Planning adds the new member to the end of the list. Hyperion Planning doesn’t have enough information to know in which order to insert this new member because it does not have its siblings’ order to compare against; the tie out process thus searches for all existing siblings of the new member and marks them as NO_MATCH to indicate that they should be loaded together. This way, Hyperion Planning will load all sibling members in the correct order.
  • Deleted Shared Members: If a shared member ceases to exist in the source metadata, it is removed completely from the Planning application. There is no reason to move it to a deleted hierarchy member because no data is associated directly with it.
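
The moved-member merge described above can be sketched in a few lines of Python. The table layout and condition labels here are simplified assumptions, not the exact tie out table schema:

```python
# Sketch of the moved-member merge in the tie out process.
# A moved member produces two rows (one per condition); we collapse
# them into a single "Moved" row that keeps the new parent.

rows = [
    {"member": "M1", "parent": "P_NEW", "condition": "Exists only in Source"},
    {"member": "M1", "parent": "P_OLD", "condition": "Exists only in the Application"},
    {"member": "M2", "parent": "P1",    "condition": "Match"},
]

def merge_moved_members(rows):
    """Collapse the two rows produced by a moved member into one row."""
    by_member = {}
    for r in rows:
        by_member.setdefault(r["member"], []).append(r)
    merged = []
    for member, group in by_member.items():
        conds = {r["condition"] for r in group}
        if {"Exists only in Source", "Exists only in the Application"} <= conds:
            # Keep the source-side row (it carries the new parent).
            src = next(r for r in group
                       if r["condition"] == "Exists only in Source")
            merged.append({**src, "condition": "Moved"})
        else:
            merged.extend(group)
    return merged

result = merge_moved_members(rows)
```

After the merge, member M1 appears once, under its new parent, instead of as a new/deleted pair.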

When the tie out process finishes populating the metadata tie out table, we will have all information we need to load only the necessary members to Planning. Because this table is centralized and has all applications and dimensions in it, it is just a matter of looping it for every application and dimension that needs to be loaded by the generic load component. To accomplish this, we must tweak the ODI KMs and procedures to make things more generic.

Loading a Dynamic Application

In order to create a process that can load any application and dimension using a single ODI interface, we must make some code changes to the KM that is responsible for loading metadata into Hyperion Planning.

But first, we need to understand the ODI concept of a KM. A KM is a set of instructions that takes the information from the source and target data stores of an ODI interface and constructs a SQL command based on those data stores. In a nutshell, an ODI KM is a code generator based on the information that you set in the interfaces, data stores, topology, and so on.
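
As a toy illustration of this code-generation idea (the function, table names and mappings below are invented for the sketch; the real ODI substitution API is far richer):

```python
# Toy illustration of what a KM does: generate a SQL command from
# interface metadata (source/target data stores plus column mappings).

def generate_load_sql(source_table, target_table, mappings):
    """Build an INSERT..SELECT the way a KM would, from metadata alone."""
    cols = ", ".join(mappings.keys())     # target columns
    exprs = ", ".join(mappings.values())  # mapped source expressions
    return (f"INSERT INTO {target_table} ({cols}) "
            f"SELECT {exprs} FROM {source_table}")

sql = generate_load_sql(
    source_table="STG_ACCOUNT",
    target_table="PLANNING_ACCOUNT",
    mappings={"MEMBER": "SRC_MEMBER", "PARENT": "SRC_PARENT"},
)
```

Change the metadata and the generated SQL changes with it; nothing in the generator is specific to one table, which is exactly the property we want to exploit.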

As we know, the default Hyperion Integration KM can load only one application and dimension at a time because a target data store is needed for each dimension in each application. If we look at what the KM does behind the scenes, we will see something like this:

Basically, the KM translates the Planning application data store to a SQL query, and, as we know, we get this data store by reversing a Planning application inside ODI. Fair enough, but this also means that if we could somehow have the same information that ODI has to reverse this application dimension to a data store, we could easily end up with the same SQL created from that data store information. As we’ve demonstrated, we have the Planning application repository itself, where all the information about a Hyperion application is stored. If we read this information, we’ll get the same information provided by the ODI data store.

Knowing this, the only thing left is to change the default KM according to our needs, and for this we need to make three changes:

  • Make the application name that is going to be loaded dynamic
  • Make the dimension name that is going to be loaded dynamic
  • Change the way that the KM builds the SQL command that will load metadata to Hyperion Planning. Currently, it builds this command based on the source and target data stores and the interface mappings

Figure 11 demonstrates how a default Planning integration KM works. It has two main steps: “Prepare for loading” and “Load data into planning.” The first step sets all information regarding connections, log paths, load options, etc. The second step retrieves all source data based on the interface mapping and the source/target data stores and loads it to Planning. In our case, the application and dimension names reside in the first step and the SQL command resides in the second step, so we already know where we need to change the code.

But we need to analyze further to know what exactly we need to change. ODI gets the application name from the <%=snpRef.getInfo("DEST_CATALOG")%> API function; this function returns the application name in the destination target store, which is connected to a logical schema. This logical schema finally resolves into a physical schema that contains the application name itself. If we change it to an ODI variable, we can encapsulate this interface into an ODI package and loop it, passing the application name as a parameter. This makes it independent of the target data store topology information and lets us load any Hyperion Planning application using a single interface.

The dimension name follows the same logic: ODI gets it from the <%=snpRef.getTargetTable("RES_NAME")%> API function, which returns the resource name from the target data store that, in this case, is the dimension name itself. Again, if we change it to an ODI variable, we can encapsulate this interface into an ODI package and loop it, passing the dimension name as a parameter, making it independent of the target data store resource name, and enabling us to load any dimension with one interface.
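
Concretely, these first two changes boil down to replacing the substitution API calls in the “Prepare for loading” step with ODI variables. The fragment below is a sketch of that swap, not the literal KM source; the variable names #SCHEMA_APP and #DIMENSION are our own:

```
-- Default KM, "Prepare for loading" step:
"APPLICATION_NAME" : "<%=snpRef.getInfo("DEST_CATALOG")%>"
"DIMENSION_NAME"   : "<%=snpRef.getTargetTable("RES_NAME")%>"

-- After the change, driven by package variables:
"APPLICATION_NAME" : "#SCHEMA_APP"
"DIMENSION_NAME"   : "#DIMENSION"
```

With this in place, the same interface can be called in a loop, once per application/dimension pair.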

The third part is the most complex. ODI data stores for planning applications are so different from one dimension to another that they require one data store object for each dimension. Figure 10 demonstrates that ODI relies on the “odiRef.getColList” API command to return all mappings done in the target dimension data store, which has the correct dimension format required to load that dimension metadata into Planning.

So the big question is: How can we change the “Load data into planning” step to use a dynamic SQL to create dynamic interface mappings to load to any application/dimension? The answer is to rely again on the “Command on Source/Target” concept and on the planning repository metadata information.

Instead of getting the mapping information from the ODI data store object, we can query the Planning repository to get the same mapping for all dimensions and applications being loaded. The result of this query is a formatted mapping, identical to what ODI would have generated if we used the default planning development—but with the big advantage of being entirely dynamic to any application and dimension.

In Figure 12 we see an example using an Attribute dimension. The command on source will query HSP_OBJECT and HSP_ATTRIBUTE_DIM of a given application (defined by #SCHEMA_APP variable) to retrieve information about one attribute dimension (defined by #DIMENSION variable). Those variables are passed from an external ODI package that will be used to loop all applications and dimensions that we want to load.
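
A miniature, self-contained version of that repository query can be sketched with sqlite3 standing in for the Planning repository. The HSP_* tables here carry only the columns the sketch needs; the real repository schema is considerably wider, and the column names are simplified assumptions:

```python
import sqlite3

# Miniature stand-in for the Planning repository. The real HSP_OBJECT /
# HSP_ATTRIBUTE_DIM tables have many more columns; we keep just enough
# to show how "Command on Source" builds a mapping string.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE HSP_OBJECT (OBJECT_ID INTEGER, OBJECT_NAME TEXT);
    CREATE TABLE HSP_ATTRIBUTE_DIM (ATTR_ID INTEGER, ATTR_DIM_ID INTEGER);
    INSERT INTO HSP_OBJECT VALUES (101, 'Region');
    INSERT INTO HSP_ATTRIBUTE_DIM VALUES (1, 101);
""")

dimension = "Region"  # would come from the #DIMENSION ODI variable
model = con.execute(
    """
    SELECT 'SRC_MEMBER "' || O.OBJECT_NAME || '", '
        || 'SRC_PARENT "Parent", SRC_ALIAS "Alias: Default"'
    FROM HSP_ATTRIBUTE_DIM A
    JOIN HSP_OBJECT O ON O.OBJECT_ID = A.ATTR_DIM_ID
    WHERE O.OBJECT_NAME = ?
    """,
    (dimension,),
).fetchone()[0]
```

The resulting string is the dimension-specific mapping that “Command on Source” hands to “Command on Target” as #MODEL.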

If we take a further look into all the different data stores that a Planning application could have, we will see a pattern regarding the information that we need to send to Planning to load metadata, depending on each dimension, as shown in Table 5.

The logic to create the dynamic mapping columns is exactly the same used to create the inbound and extract tables. The only difference is that for the inbound and extract tables we need to put all columns together, while for the KM mapping we must, depending on the selected dimension, take the right information from the application repository. This information will help us create the necessary mapping—containing the right source columns and the alias of those columns—which will inform Planning about what each metadata column stands for.

Since our metadata tie out table contains standard columns for all dimensions, we don’t need to worry about adjustments when we change to another dimension, and since our source metadata table already has the metadata information in the correct planning format, we don’t even need any kind of transformation here—it’s just a matter of reading from the metadata source table and loading directly to Planning.

In the Figure 12 example, we will use SRC_MEMBER, SRC_PARENT and SRC_ALIAS as the mapping columns; for the Planning alias, the only dynamic one is the member name alias that identifies the dimension name. To get this information, we query the application repository, looking for the attributes in HSP_ATTRIBUTE_DIM and for their names in the HSP_OBJECT table. Finally, we can use the OBJECT_NAME column to get the dimension name alias.

Executing this query, we will get a one-line mapping string that will be passed as a parameter (#MODEL) from “Command on Source” to “Command on Target” and will enable ODI to load metadata to that specific dimension/application. If we execute this interface and look at the query created in the ODI operator, we will see that the result is the same as a default KM would create—but with the big advantage of being entirely dynamic. Following this logic, we would only need to change the value of the #SCHEMA_APP and #DIMENSION variables to get another application/dimension loaded.

Of course, we need to work a little more to get the mapping for the other dimensions (such as Account or Entity), but the idea will be always the same: query the application repository to get the data store information depending on the dimension/application selected.

Tie Out Table Column | Planning Alias | Mapping | Type
SRC_PARENT | Parent | 'SRC_PARENT "Parent"' | Fixed
SRC_ALIAS | Alias: Default | 'NVL(SRC_ALIAS,''<NONE>'') "Alias: Default"' | Fixed
OPERATION | Operation | From an option in the IKM. | Dynamic
SRC_DATASTORAGE | Data Storage | 'SRC_DATASTORAGE "Data Storage"' | Fixed
SRC_TWOPASS_CALC | Two Pass Calculation | 'SRC_TWOPASS_CALC "Two Pass Calculation"' | Fixed
SRC_FORMULA | Formula | 'SRC_FORMULA "Formula"' | Fixed
SRC_DATA_TYPE | Data Type | 'SRC_DATA_TYPE "Data Type"' | Fixed
SRC_CONS_PLAN_TYPE1 | Aggregation (Plan1) | 'SRC_CONS_PLAN_TYPE1 "Aggregation ('||HSP_PLAN_TYPE.TYPE_NAME||')"' | Dynamic
SRC_CONS_PLAN_TYPE2 | Aggregation (Plan2) | 'SRC_CONS_PLAN_TYPE2 "Aggregation ('||HSP_PLAN_TYPE.TYPE_NAME||')"' | Dynamic
SRC_CONS_PLAN_TYPE4 | Aggregation (Plan3) | 'SRC_CONS_PLAN_TYPE4 "Aggregation ('||HSP_PLAN_TYPE.TYPE_NAME||')"' | Dynamic
SRC_CONS_PLAN_TYPE8 | Aggregation (Workforce) | 'SRC_CONS_PLAN_TYPE8 "Aggregation ('||HSP_PLAN_TYPE.TYPE_NAME||')"' | Dynamic
SRC_CONS_PLAN_TYPE16 | Aggregation (Capex) | 'SRC_CONS_PLAN_TYPE16 "Aggregation ('||HSP_PLAN_TYPE.TYPE_NAME||')"' | Dynamic
SRC_PLAN_TYPE1 | Plan Type (Plan1) | 'SRC_PLAN_TYPE1 "Plan Type ('||HSP_PLAN_TYPE.TYPE_NAME||')"' | Dynamic
SRC_PLAN_TYPE2 | Plan Type (Plan2) | 'SRC_PLAN_TYPE2 "Plan Type ('||HSP_PLAN_TYPE.TYPE_NAME||')"' | Dynamic
SRC_PLAN_TYPE4 | Plan Type (Plan3) | 'SRC_PLAN_TYPE4 "Plan Type ('||HSP_PLAN_TYPE.TYPE_NAME||')"' | Dynamic
SRC_PLAN_TYPE8 | Plan Type (Workforce) | 'SRC_PLAN_TYPE8 "Plan Type ('||HSP_PLAN_TYPE.TYPE_NAME||')"' | Dynamic
SRC_PLAN_TYPE16 | Plan Type (Capex) | 'SRC_PLAN_TYPE16 "Plan Type ('||HSP_PLAN_TYPE.TYPE_NAME||')"' | Dynamic
SRC_ACCOUNT_TYPE | Account Type | 'SRC_ACCOUNT_TYPE "Account Type"' | Fixed
SRC_TIME_BALANCE | Time Balance | 'SRC_TIME_BALANCE "Time Balance"' | Fixed
SRC_SKIP_VALUE | Skip Value | 'SRC_SKIP_VALUE "Skip Value"' | Fixed
SRC_EXC_RATE | Exchange Rate Type | 'SRC_EXC_RATE "Exchange Rate Type"' | Fixed
SRC_VARIANCE_REP | Variance Reporting | 'SRC_VARIANCE_REP "Variance Reporting"' | Fixed
SRC_SRC_PLAN_TYPE | Source Plan Type | 'SRC_SRC_PLAN_TYPE "Source Plan Type"' | Fixed
SRC_CURRENCY | Base Currency | 'SRC_CURRENCY "Base Currency"' | Fixed

Table 6: Dimensions Mapping Information

In Table 6, we see all the possible mapping combinations that we can have in a Planning application for the main planning dimensions. Some information is dynamic (dependent on the Planning repository) and some is fixed. Here are some tips for putting everything together in a single query:

  • The majority of the columns are fixed and can be obtained with a simple “select ‘Any string’ from dual”
  • The easiest way to create this SQL is to create separate SQL statements for each different kind of information and put everything together using UNION statements
  • Split the final query into small queries to get the different categories presented in Table 5
  • Use the MULTI_CURRENCY column in the HSP_SYSTEMCFG table to find out whether the application is a multicurrency one
  • For aggregations and plan type mappings, get the name of the plan type itself from the HSP_PLAN_TYPE table
  • When the query is ready, add a filter clause to select only the dimension to which that information belongs
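
Putting those tips together, the assembly of the final query can be sketched as follows. The tie out column names follow Table 6; the piece-builder functions and the FIXED dictionary are our own simplification of the union-of-small-queries approach:

```python
# Sketch: assemble the mapping query from fixed and dynamic pieces.
# Fixed pieces become simple SELECT ... FROM dual rows; dynamic ones
# join HSP_PLAN_TYPE in the real repository query.

FIXED = {
    "SRC_PARENT": "Parent",
    "SRC_DATASTORAGE": "Data Storage",
    "SRC_FORMULA": "Formula",
}

def fixed_piece(column, alias):
    return f"SELECT '{column} \"{alias}\"' AS MAPPING FROM dual"

def dynamic_piece(column, label):
    # The plan type name comes from HSP_PLAN_TYPE at runtime.
    return (f"SELECT '{column} \"{label} ('||HSP_PLAN_TYPE.TYPE_NAME||')\"' "
            f"AS MAPPING FROM HSP_PLAN_TYPE")

pieces = [fixed_piece(c, a) for c, a in FIXED.items()]
pieces.append(dynamic_piece("SRC_CONS_PLAN_TYPE1", "Aggregation"))
query = "\nUNION ALL\n".join(pieces)
```

In the real process a final WHERE clause then filters the pieces down to the dimension being loaded.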

With the query ready, the only missing step is to insert it into the “Command on Source” tab inside the Planning KM and pass the string generated by it to the “Command on Target” tab (see Figure 12).

We are now prepared to put everything that we have learned into an ODI package that will dynamically load metadata into any number of Planning applications.

Putting Everything Together: Dynamic Hyperion Planning Metadata Integration In Action

Now, we assemble all the pieces into a generic process that can load any Planning dimension to any application using generic components. The ODI architecture was created to be as modular as possible: each component is responsible for a very specific piece of code, and one main ODI scenario is responsible for orchestrating and calling each of the specific scenarios needed for this process. A summarized flow of this main ODI scenario looks like Figure 13, below:

The process accepts two input parameters. The first one, “Application Name,” indicates which application the metadata maintenance will be loaded into. In this case, the user can input one application at a time or input “ALL,” indicating that they want to load metadata to all applications contained in a previously populated parameter table (that will hold the valid application names) or to all existing applications in the Planning repository, as in Figure 7.

The parameter “Dimension Selection” indicates which dimension will be loaded to Planning. Again, the user can select one dimension at a time or input “ALL,” indicating that the process will load all existing dimensions of the selected application.
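
The resolution of the two parameters into a work list can be sketched like this. The parameter table is mocked as a dictionary; names are invented for the sketch:

```python
# Sketch of resolving "Application Name" and "Dimension Selection"
# into (application, dimension) pairs. PARAM_APPS stands in for the
# parameter table; dimension lists would come from each application's
# repository in the real process.

PARAM_APPS = {"PLAN_US": ["Account", "Entity"], "PLAN_EMEA": ["Account"]}

def resolve(app_param, dim_param):
    apps = list(PARAM_APPS) if app_param == "ALL" else [app_param]
    work = []
    for app in apps:
        dims = PARAM_APPS[app] if dim_param == "ALL" else [dim_param]
        work.extend((app, d) for d in dims)
    return work

jobs = resolve("ALL", "ALL")
```

Each resulting pair is then handed to the generic extract and load components.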

After the input parameters, the process is divided into four components: Extract Planning Dimensions, Load Source Metadata, Process Tie Out and Load Planning Dimensions. Each of these components may call several ODI child scenarios that execute its process for one specific application and dimension, allowing us to load everything in parallel, for a huge performance gain. Let’s detail each of these components and see how they behave in the final load process.

Extract Planning Dimensions

The extract process is separated into two components: the first is a very simple procedure that resides in the ODI main package (Figure 13) and is responsible for creating a loop, using the “Command on Source” and “Command on Target” concept, over all applications that need to be extracted. The source tab gets all applications that exist in the Planning repository or in a parameter table, and “Command on Target” calls an ODI scenario that contains the second extract component for each line returned by the source tab.

The second extract component gets each application passed from the first process and extracts to the standard extract table all metadata information regarding the dimensions that exist in that Planning application. This second component also relies on “Command on Source” and “Command on Target” to get all existing dimensions, or just the one dimension passed as a user parameter, in the source tab and to insert that dimension data into the extract table in the target tab. At the end of this process we will have the metadata extract table with all metadata information that exists in all Planning applications/dimensions valid for that execution.
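
The “Command on Source” / “Command on Target” loop used here can be sketched in miniature. The scenario call is stubbed out; in ODI it would be an OdiStartScen-style tool call, and the names below are invented:

```python
# The "Command on Source / Command on Target" loop, in miniature:
# the source query yields one row per application, and the target
# command runs once per row, launching a child scenario.

def command_on_source():
    # Real version: SELECT application names from the Planning
    # repository or from a parameter table.
    return [("PLAN_US",), ("PLAN_EMEA",)]

calls = []
def start_scenario(name, **params):
    # Stand-in for the ODI tool that launches a scenario.
    calls.append((name, params))

for (app,) in command_on_source():
    # Command on Target: one scenario call per source row.
    start_scenario("EXTRACT_DIMENSIONS", SCHEMA_APP=app)
```

The same pattern drives the load loop later on, with dimensions added to the source query.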

Load Source Metadata

This component is responsible for calling one or more scenarios that will populate the inbound metadata table. Since the metadata may come from different kinds of sources (such as Oracle tables, text or CSV files, and so on), this is not a fixed component and may be altered to add or remove any call to external load scenarios. Those external load scenarios will be developed to populate the inbound metadata table and may contain many requirements, rules and data transformations. This is the key benefit of having a standard inbound table for a metadata maintenance process: you may add as many load processes as you want/need, all loading one single inbound table; after that, the load process into Planning is untouched and generic to any number of applications and dimensions, decreasing development and maintenance costs.

Process Tie Out

After having both inbound and extract tables filled with metadata information, the process populates the metadata tie out table as explained above. This procedure does not need to call any auxiliary scenario or create any kind of loop, because all information regarding all applications/dimensions is now placed in the inbound and extract tables, so it is just a matter of reading all that information and populating the metadata tie out table with the correct CONDITION status. At the end of this process, we will end up with a table that has all information regarding what to do with each metadata member that needs to be loaded into the Planning applications.
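
The core of that classification can be sketched as a comparison of (member, parent) keys between the two tables. The condition labels and sample members below are simplified assumptions:

```python
# Sketch of the tie out classification: compare source and extracted
# members by (member, parent) and assign a CONDITION to each key.

source = {("Sales", "Income"), ("Costs", "Expense")}
application = {("Sales", "Income"), ("OldAcct", "Income")}

tie_out = {}
for key in source | application:
    if key in source and key in application:
        tie_out[key] = "Match"            # unchanged, skip at load time
    elif key in source:
        tie_out[key] = "Exists only in Source"       # new or moved-to
    else:
        tie_out[key] = "Exists only in the Application"  # deleted or moved-from
```

The special cases described earlier (moved members, attributes, siblings, shared members) are then applied on top of this basic classification.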

Load Planning Dimensions

The load process is also divided into two components: the first is a simple procedure that resides in the ODI main package (Figure 13) and is responsible for creating a loop, using the “Command on Source” and “Command on Target” concept, over all applications/dimensions that need to be loaded. The source tab gets a distinct list of all applications and dimensions that exist in the metadata tie out table, filtering just the members that don’t have a “Match” CONDITION status. The CONDITION status is filtered here to get only those applications and dimensions that had any member that was changed or that needed to be loaded, avoiding loading any unnecessary metadata into Planning and improving load performance. Then “Command on Target” calls an ODI scenario that contains the second load component for each line returned from the source tab.

The second load component is responsible for getting each application and dimension passed from the first process and for loading all metadata information regarding that dimension from the metadata tie out table. This component is separated into three load steps:

  1. Delete Attribute Members: This step filters the metadata tie out table, checking whether that dimension is an attribute dimension. If yes, it sends to the Planning application all members with the CONDITION status “No Match” or “Deleted Attribute” using the load operation “Delete Idescendants.” This will delete all attribute members that changed from one parent to another (because Hyperion Planning does not move attribute members automatically) as well as all attribute members that no longer exist in the source metadata table.
  2. Delete Shared Members: This step sends to the Planning application all members with the CONDITION status “Deleted Share” using the load operation “Delete Idescendants.” This will delete all shared members that no longer exist in the source metadata table.
  3. Load Planning Members: This step sends to the Planning application all members with a CONDITION status that is not “Match”, “Deleted Share” or “Deleted Attribute” using the load operation “Update.” This will load all new and modified members, shared members and attributes. It will also move all deleted members to their respective deleted hierarchy.
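
The three steps above can be sketched as filters over the tie out table. The row data and helper names are invented; the condition and operation names follow the text:

```python
# Sketch of the three load passes over the tie out table.

rows = [
    {"member": "Attr1", "condition": "Deleted Attribute"},
    {"member": "Shr1",  "condition": "Deleted Share"},
    {"member": "Acct1", "condition": "No Match"},
    {"member": "Acct2", "condition": "Match"},
]

def members_for_step(rows, step):
    if step == "delete_attributes":   # operation: Delete Idescendants
        keep = {"No Match", "Deleted Attribute"}
        return [r for r in rows if r["condition"] in keep]
    if step == "delete_shared":       # operation: Delete Idescendants
        return [r for r in rows if r["condition"] == "Deleted Share"]
    if step == "load_members":        # operation: Update
        skip = {"Match", "Deleted Share", "Deleted Attribute"}
        return [r for r in rows if r["condition"] not in skip]
    raise ValueError(step)

loaded = members_for_step(rows, "load_members")
```

Note how the “Match” members never reach Planning at all, which is where the performance gain comes from.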

And that finishes the load process: all metadata from any number of Hyperion Planning applications/dimensions is loaded using only one generic component with three generic steps. It’s pretty amazing!

Conclusion: Dynamic Integrations in a Real Environment

This article demonstrates the challenges of building a centralized and flexible development architecture for Hyperion Planning applications, using ODI to maintain metadata for any number of applications and dimensions while keeping the environment prepared to accept new applications or significant changes to current ones. But all of this hard work is compensated for by the benefits that this new structure delivers to the business. Table 7 (below) captures the major differences between the old and new structures.

 | Old Structure | New Structure
Metadata Load Process | All metadata is loaded every time the process runs | Only new or changed metadata is loaded when the process runs
Execution Time | 8 hours to finish the process for all regions’ applications | 1 hour to finish the process for all regions’ applications (depends on the amount of changed data)
Total Number of ODI Objects | Multiple interfaces, depending on the number of applications and dimensions: 19 packages, 51 interfaces, 32 procedures | One generic interface that loads metadata regardless of the number of applications or dimensions: 8 packages, 6 interfaces, 9 procedures
Development Cost | Great effort replicating similar interfaces for new applications and dimensions | Only requires configuring the process parameters to include the new application or dimension
Maintenance Cost | Multiple change/error points to be verified each time a change is needed | Centralized code for all applications; only one place to change if required
Scalability | New interfaces must be created to supply new applications | No need to create any new interface; the code automatically identifies a new application based on a configuration table

Table 7: Process Final Results

Those are the biggest impacts identified after the project went live. Since then, we’ve already had two other Planning projects: a new Planning application, and another that changed the entity hierarchy to reflect a new way of presenting financial data.

The development time for including a new application was minimal, and was restricted to some inserts into the parameter table, plus a union in the procedure (external ODI package) that takes the metadata from the source to the inbound table and transforms the data as Planning requires. That was a great test to see if all the effort paid off—and it certainly did, as the development phase took only four days to complete.

For the second project, there was no development effort, because the designed metadata process can handle metadata loads as well as metadata changes and deletions; the only action necessary was to execute the process to get the new data into the Planning application and move the old metadata to the deleted hierarchy, all in a single execution of the metadata load ODI scenario.

Since this project had an established final delivery date, we did the best that we could with the limited time. We couldn’t implement some ideas that were still in development, such as using the Essbase API to get information about Aggregate Storage Option (ASO) cubes (as we did with the Planning application repository to get its ODI data store information) and including the ASO cubes in this dynamic architecture. In the future, we may be able to include in the metadata maintenance process not only the Planning applications but also any ASO cube that the company may have.

ODI is a dynamic and powerful tool that can provide an excellent platform for creating and maintaining EPM environments. This article demonstrates that we are limited in its use only by our imagination. With a few changes in the KM and a structured set of tables we can overcome the boundaries of the default development, achieving a new level of excellence and thereby providing the increased flexibility, reliability and scalability to meet the challenges of a global and competitive environment.

Comments and feedback awaited 🙂
