
Wednesday 10 October 2012

Simple Customization of the Oracle BI Applications Data Warehouse


    In today’s posting on the Oracle BI Applications, we’re going to take a look at adding custom data to the BI Apps data warehouse. In my experience there are a number of common scenarios where new data needs to be added to the BI Apps data warehouse, including:
    1. Adding new columns, or new flexfields, to existing fact and dimension tables already loaded into the warehouse
    2. Adding data from a custom source, or one that’s not supported for a particular subject area (for example, Oracle CRM for the Sales Analytics subject areas), into existing warehouse structures, and
    3. Adding completely new tables, dimensions and facts into the warehouse and loading them either from supported or non-supported sources.
    As you can see from this list, the types of data sources that you might want to load into the warehouse include:
    1. Sources that already have adapters and existing load routines for a particular subject area; for example, Siebel CRM for the various Sales Analytics modules, and Oracle E-Business Suite, PeopleSoft and SAP for the ERP modules.
    2. Sources that have adapters but do not have load routines for a particular subject area; this covers all of the ERP sources for the Sales Analytics modules, and
    3. Sources for which no adapters, and by definition no load routines, exist.
    Going by size of effort, the simplest customizations are where you have an existing supported source with existing load routines, and you want to add a new column to a fact or dimension table. Along with extracts from an unsupported source (these use what’s called the “Universal” adapter), these are called “Category 1” customizations in the BI Apps documentation, and there are naming standards for the extra columns you add and the mappings you create, together with a recommended way to enhance and modify the existing mappings.
    [Image: Category types]
    The next most complicated is where you want to add new fact and dimension tables from either a supported or unsupported source. These require you to write your own SDE (Source-Dependent Extract) mappings to pull data from the system you’re extracting from into staging tables, and your own SIL (Source-Independent Load) mappings to load the warehouse structures from those staging tables. There are various naming standards for the facts, dimensions, underlying tables and the columns within them, as well as standards around a set of columns you need to include so that the tables can be managed by the DAC.
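    To give a feel for what those standards imply, here is a minimal sketch of the DDL for a new custom dimension table. The WC_ prefix follows the convention the BI Apps documentation uses for new custom tables, but the table name, the ZONE_NAME attribute and the column sizes are assumptions for illustration only; the system columns mirror the ones you’ll see in the W_CUSTOMER_LOC_D listing later in this post.

    -- Hedged sketch: WC_OUTREACH_ZONE_D and ZONE_NAME are hypothetical.
    -- The remaining columns follow the pattern of the packaged W_% tables
    -- so that the table can be managed by the DAC.
    create table wc_outreach_zone_d (
      row_wid            number(10)        not null,  -- surrogate key
      zone_name          varchar2(80 char),           -- business attribute
      integration_id     varchar2(80 char) not null,  -- natural key from the source
      datasource_num_id  number(10)        not null,  -- identifies the source system
      etl_proc_wid       number(10)        not null,  -- ETL run that loaded the row
      w_insert_dt        date,                        -- warehouse insert date
      w_update_dt        date,                        -- warehouse update date
      x_custom           varchar2(10 char)            -- customization marker column
    );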
    The most complicated customizations of all are where you add new rows into existing fact tables from additional sources, as you need to carefully co-ordinate your own mappings with the existing predefined ones and make sure all your metadata and control metadata line up correctly.
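    One concrete aspect of that co-ordination is the DATASOURCE_NUM_ID column: rows in a warehouse table are identified by INTEGRATION_ID plus DATASOURCE_NUM_ID, so each additional source needs to load its rows under its own DATASOURCE_NUM_ID. A quick profile like the one below (a hedged illustration against the customer location dimension described later in this post) shows how rows from each registered source co-exist in the same table.

    -- Count rows per source system; INTEGRATION_ID is only unique
    -- within a single DATASOURCE_NUM_ID.
    select datasource_num_id, count(*) as row_count
    from   w_customer_loc_d
    group  by datasource_num_id
    order  by datasource_num_id;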
    In this posting then, I’ll start off simply and take a look at an example of a Category 1 customization. To get across some of the common concepts such as registering the new mappings in the DAC, in this first example I won’t be concerned with the SIL and SDE mappings that are generally used to load data into the warehouse, and instead I’ll create my own simple mapping. In the next posting I’ll take a look at modifying existing mappings to load in new data, and later on I’ll look at Category 2 and 3 customizations and also how the BI Apps handles features such as changed data capture and Slowly Changing Dimension Type 2 loads.
    To take an example then, we have a table in the warehouse called W_CUSTOMER_LOC_D that represents the customer location dimension. We are going to take this dimension table and add some additional columns to it to hold data on satisfaction levels. To do this, we need to import any new source tables, amend the target table to add the new columns, and then create a new mapping to bring in the updated dataset.
    To start this process off, I first go into the Repository Manager application and create a new CUSTOM_ETL folder, then switch to the Designer application to import the new source tables, including a new one from the Oracle E-Business Suite dataset and a staging table from the warehouse schema.
    [Image: Source data]
    Now if you take a look at the W_CUSTOMER_LOC_D table in SQL*Plus, say, you’ll see that amongst all the IDs, level information and attributes there is a column called X_CUSTOM that marks the “safe path” through the table.
    SQL> desc w_customer_loc_d
     Name                                      Null?    Type
     ----------------------------------------- -------- ---------------------
     ROW_WID                                   NOT NULL NUMBER(10)
     CUSTOMER_WID                                       NUMBER(10)
     ADDRESS_TITLE                                      VARCHAR2(255 CHAR)
     ST_ADDRESS1                                        VARCHAR2(255 CHAR)
     ST_ADDRESS2                                        VARCHAR2(255 CHAR)
     CITY                                               VARCHAR2(80 CHAR)
     COUNTY                                             VARCHAR2(80 CHAR)
     POSTAL_CODE                                        VARCHAR2(50 CHAR)
     STATE_CODE                                         VARCHAR2(50 CHAR)
     STATE_NAME                                         VARCHAR2(80 CHAR)
     STATE_REGION                                       VARCHAR2(80 CHAR)
     COUNTRY_CODE                                       VARCHAR2(50 CHAR)
     COUNTRY_NAME                                       VARCHAR2(80 CHAR)
     COUNTRY_REGION                                     VARCHAR2(80 CHAR)
     PHONE_NUM                                          VARCHAR2(30 CHAR)
     FAX_NUM                                            VARCHAR2(30 CHAR)
     EMAIL_ADDRESS                                      VARCHAR2(255 CHAR)
     WEB_ADDRESS                                        VARCHAR2(255 CHAR)
     AUTO_ROUTING_CODE                                  VARCHAR2(50 CHAR)
     ADDR_LATITUDE                                      NUMBER(22,7)
     ADDR_LONGITUDE                                     NUMBER(22,7)
     PRMRY_CNTCT_NAME                                   VARCHAR2(255 CHAR)
     ACTIVE_FLG                                         CHAR(1 CHAR)
     CREATED_BY_WID                                     NUMBER(10)
     CHANGED_BY_WID                                     NUMBER(10)
     CREATED_ON_DT                                      DATE
     CHANGED_ON_DT                                      DATE
     AUX1_CHANGED_ON_DT                                 DATE
     AUX2_CHANGED_ON_DT                                 DATE
     AUX3_CHANGED_ON_DT                                 DATE
     AUX4_CHANGED_ON_DT                                 DATE
     SRC_EFF_FROM_DT                                    DATE
     SRC_EFF_TO_DT                                      DATE
     EFFECTIVE_FROM_DT                         NOT NULL DATE
     EFFECTIVE_TO_DT                                    DATE
     DELETE_FLG                                         CHAR(1 CHAR)
     CURRENT_FLG                                        CHAR(1 CHAR)
     W_INSERT_DT                                        DATE
     W_UPDATE_DT                                        DATE
     DATASOURCE_NUM_ID                         NOT NULL NUMBER(10)
     ETL_PROC_WID                              NOT NULL NUMBER(10)
     INTEGRATION_ID                            NOT NULL VARCHAR2(80 CHAR)
     SET_ID                                             VARCHAR2(30 CHAR)
     TENANT_ID                                          VARCHAR2(80 CHAR)
     X_CUSTOM                                           VARCHAR2(10 CHAR)
    
    The customization standard for the Oracle BI Applications states, firstly, that any new columns should be added after this X_CUSTOM column, and secondly, that they should all be named with the same “X_” prefix to show that they are custom columns. I therefore take a copy of this table so that I don’t lose the original version, and then add the new column that I want to bring in from the new source.
    SQL> create table lab_w_customer_loc_d
      2  as select *
      3  from w_customer_loc_d
      4  where 1=0
      5  /
    
    Table created.
    
    SQL> alter table lab_w_customer_loc_d
      2  add x_outreach_zone varchar2(50);
    
    Table altered.
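
    As a quick sanity check, assuming you’re connected as the warehouse schema owner, a query against the USER_TAB_COLUMNS dictionary view should now show both “X_” columns on the copy of the table:

    select column_name, data_type
    from   user_tab_columns
    where  table_name = 'LAB_W_CUSTOMER_LOC_D'
    and    column_name like 'X\_%' escape '\';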
    
    Now for this example, I’m going to load into this new table; in a real situation, however, I would of course load into the existing warehouse dimension table, and I’d need to update the metadata in the Informatica Repository to reflect the new column.
    Before I can load data into the new table, I need to add it as a new target within my repository. I do this using the Designer application and then end up with the following folder contents:
    [Image: Target data]
    Now that we’ve got the sources and targets defined, it’s time to create the mapping. We could create a copy of an existing mapping and make our amendments to that, but in this case we’ll just create a completely new mapping, which we’ll include shortly in our DAC load of the warehouse.
    [Image: Type 1 mapping]
    Now it’s worth pointing out at this stage that this is a pretty simple customization, as we’ve not had to customize any existing mappings; we’ve created a completely new one from scratch. If I wanted to customize the existing mapping that loaded the table instead, I’d find it in the repository, copy it to the folder I created earlier and then add my new columns to the mapping, making sure I added any new columns after the “safe path” through the mapping objects.
    [Image: Edit existing mapping]
    For now, though, we’ll leave the editing of existing mappings, and the creation of SIL and SDE objects, until tomorrow’s posting, the last in the series.
    Now that we’ve got the Informatica mapping and the new source and target tables, it’s time to move on to the DAC Console and register these new objects within the DAC repository, as up until now the DAC has no knowledge of any of the new tasks and tables we’ve created.
    First of all, we need to create references within the DAC repository to the folders we just created in Informatica, and register these as new data sources.
    [Image: Create new source system folders]
    Then we have to tell the DAC what types of tables these are; LAB_CUST_OUTREACH is a source and LAB_W_CUSTOMER_LOC_D is a dimension.
    [Image: Set table type]
    These descriptions are stored in the DAC repository and are then displayed to administrators when they are trying to establish the role of a particular table in a load routine.
    Once the tables are in the repository, the next job is to create a new task and associate it with the Informatica mapping that we created in the previous step. The name given in the DAC repository needs to correspond to the workflow name in the Informatica repository, as it’s this name that the DAC will use to call the workflow when you come to execute the load process.
    [Image: Enter task details]
    Then you need to specify the source and target tables for the task, like this:
    [Image: Denote source and targets]
    Then, you need to create a new subject area in the repository and add this new task to it, so that when the new subject area is selected, DAC knows what ETL tasks are required to populate it.
    You’re now in a situation where the tasks are all defined and the base metadata set up. The next step is to switch to the Execute view in the DAC and create a new execution plan, like I did in this previous post where I ran the prebuilt ETL routines. As with the previous exercise, it’s a case of selecting the subject area, generating the parameters and then filling them in, and then generating the ordered list of tasks that loads the area.
    [Image: Build execution plan]
    Then it’s just a case of running the execution plan and checking that it completes OK.
    [Image: Run execution plan]
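    Once the plan completes, a couple of quick checks against the new table confirm that the mapping actually loaded rows. This is a hedged example rather than a formal verification step; ETL_PROC_WID, as seen in the table listing earlier, ties each row back to the ETL run that loaded it.

    -- Confirm the new mapping populated the table, and see which
    -- ETL run (ETL_PROC_WID) each batch of rows came from.
    select etl_proc_wid, count(*) as rows_loaded
    from   lab_w_customer_loc_d
    group  by etl_proc_wid;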
    So there we have it, a simple customization added to the Informatica Repository, the Oracle Business Analytics Warehouse and the DAC Repository. We’ve got the DAC to run the mapping along with any other subject areas we include in the execution plan, and now we’re ready to move on to the last posting in this series, where we take a set of new tables and use them to add some additional attributes to an existing dimension, and a new set of fact and dimension tables that will co-exist with the rest of the business analytics warehouse.

    Comments

    1. BVL Narayana Says: 
      Mark,
      Let’s say we modify an existing mapping supplied by Oracle to load data into custom columns. Assume there is some error, or an upgrade released by Oracle. Since I have ‘changed’ code supplied by Oracle, would it be supported? Also, going forward, through all the upgrades, whose responsibility is it to carry forward all these customizations?
      Your comments/advice on this are much appreciated.
      thanks
      Narayana.
    2. Peter Scott Says: 
      @BVL Narayana
      A good point about code changes and Oracle updates.
      When you modify mappings you should work with copies of the original mapping. When you add columns to a table you add them after the X_CUSTOM column and use the X_ prefix convention. When you apply patches or updates, Oracle will update the original supplied mapping and not your new version. It is down to your developers to apply the custom changes to a new copy of the mapping.
      By keeping to the convention of column names, and working with cloned copies of the mappings rather than the originals supplied by Oracle, it should be a manageable, supportable process.
    3. Gordon Says: 
      To add to the originally posted question: if I extend a delivered fact/dimension to add content, or modify its existing content as required by the business, will Oracle still support the content as delivered?
    4. Mark Rittman Says: 
      @Gordon – Yes, my understanding is that Oracle will support you if you extend the model according to their methodology. If you read the article after this one in the series (http://www.rittmanmead.com/2008/07/11/amending-existing-category-1-sde-and-sil-mappings-in-oracle-bi-apps-795/) it goes through an example of how this process works.
      regards, Mark
    5. prasad mallela Says: 
      sir,
      I have been working on DAC 7.9.4 for about a month. I have made some customizations in Informatica, such as adding columns, but I did not copy the existing mapping. After reading your article I realized that we need to copy it. If I copy the existing mapping to a different folder in Informatica and register it in the DAC, what about the existing task associated with the original mapping in the DAC? Do I need to disable that? How can I make it work in order to reflect the changes that I made in the copied mapping?
      thank you,
      prasad
    6. Sesh P Says: 
      Hi,
      Any ideas on how to work with Universal Adapters for Service Contracts, please?
      Any help or a reference to documentation would be helpful.
      Thanks
      Sesh
    7. Saravana Prakash Says: 
      We are currently implementing Oracle Business Intelligence Applications.
      Some of the OBIA applications we bought are:
      1. Supply Chain and Order Management Analytics
      2. Financial Analytics
      3. Pricing Analytics
      We are implementing the Oracle R12 suite of applications, and please note we are not using any Siebel CRM systems.
      We were able to successfully implement Supply Chain and Order Management Analytics and are able to see the various dashboards and reports. However, we have a problem with Pricing Analytics.
      Here is our understanding of Pricing Analytics. It would be great if you could confirm whether our understanding is correct or not.
      1. Currently Pricing Analytics is NOT connected to Oracle R12 as one of its data sources? Is this correct?
      2. Can we use the Universal adapter to connect to R12 to see the data in Pricing Analytics?
      3. If it can be done, is there any documentation to assist with the mapping/configuration etc.?
      4. What is Oracle’s direction for Pricing Analytics? Traditionally the Advanced Pricing module has always been a packaged application with the Order Management suite of applications in Oracle; however, Oracle now aligns Advanced Pricing with the CRM set of applications. With that in mind, how will Pricing Analytics be handled in future?
      Kind Regards
      Saravana Prakash
