So at the end of last week I posted an overview of the new 7.9.5.2 release of the Oracle BI Applications, which replaces Informatica and the DAC with Oracle Data Integrator and the Configuration Manager. I said at the time that we still had SDE (Source-Dependent Extract) and SIL (Source-Independent Load) mappings, together with PLP and the other mapping types, and whilst execution plans are still around, they’re now called ODI packages and not Informatica Workflows. So what else has changed?
Going through the install and setup process, the basic approach of installing Oracle BI EE and then the BI Applications is the same as before, except this time there are fewer BI Apps modules to select from (see my previous posting for details). You obviously don’t install Informatica any more; you install ODI 10.1.3.5 instead, and once you’ve got all of these installed it’s time to set up the data sources.
Now setting up data sources is one area that’s greatly improved in this release. With the previous version of the BI Apps, you had to define data sources in both the DAC and the Informatica Workflow Manager, and you also had to define connections between the DAC and Informatica and between the DAC client and the DAC server (and in pre-7.9.5 releases, between the Informatica Repository Server and the Informatica Integration Server). Now you just define your data connections in the ODI Topology Manager, like this:
The connections themselves are defined as JDBC connections, and as database links for Oracle-to-Oracle mapping flows. You also use this interface to define the connections to the interface and workflow (scheduling) agents, and that’s about it (you also need to define the link from the Configuration Manager to the ODI repository, but that takes just a second and again is through JDBC). So score 1 to BI Apps 7.9.5.2 for this welcome simplification.
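Just to give an idea of the sort of details you end up typing into the Topology Manager, here’s a quick sketch (in Python, purely for illustration) of a set of Oracle thin-driver JDBC connections; the host names, ports and SIDs are made up, not values that ship with the BI Apps.

    # Illustrative only: the kind of JDBC details you enter into the ODI
    # Topology Manager data server definitions. Host names, ports and SIDs
    # below are made-up examples.

    ORACLE_JDBC_DRIVER = "oracle.jdbc.driver.OracleDriver"

    def thin_url(host, port, sid):
        """Build a standard Oracle thin-driver JDBC URL."""
        return f"jdbc:oracle:thin:@{host}:{port}:{sid}"

    connections = {
        "EBS_11_5_10_SOURCE":  thin_url("ebssrv", 1521, "VIS"),     # EBS source
        "BI_APPS_DW_TARGET":   thin_url("dwsrv", 1521, "OBAW"),     # warehouse target
        "ODI_WORK_REPOSITORY": thin_url("odisrv", 1521, "ODIREP"),  # ODI repository
    }

    for name, url in connections.items():
        print(f"{name}: driver={ORACLE_JDBC_DRIVER}, url={url}")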
In the previous releases of the BI Apps, you imported the BI Applications repository into Informatica using the Informatica web-based console, then imported the DAC repository into the DAC and created the required containers for your particular source systems. In 7.9.5.2 you first import your ODI Master Repository, which contains entries for the various sources and agents you are going to use, like this:
Then shortly afterwards you import the ODI Work Repository, which contains the actual PLP, SIL, SDE etc mappings and the various master execution plans for sequencing them together. All in all this takes a couple of hours, around the same amount of time as (possibly a bit less than) the Informatica release of the BI Apps.
Once you’ve imported all of the mappings, you can see how they are stored in the Project view of the ODI Designer application.
In the screenshot above, you can see the folders that map to the Adapters that you got in the previous release (SDE_ORA11510_Adaptor, etc), with the naming consistent with the Informatica release but of course only with support for EBS 11.5.10 as a source. Under the Mapping folder are folders for utilities and other tasks that you will need to carry out, together with the master (template) execution plans that are leveraged in the Configuration Manager application.
Taking a closer look at one of the mappings in the SDE folder, you can see that it corresponds to a single mapping, and a single workflow, in the previous Informatica-backed BI Apps releases. If you expand the mapping you can actually see that it’s a “package” (equivalent to a “workflow” in Informatica), with the package calling a number of ODI interfaces. The ones with blue icons are ODI interfaces as we would normally know them, whilst the ones with a yellow icon correspond to a cross between the “source qualifier” transformations in the Informatica mappings and the “mapplets” that were used to encapsulate access to EBS data areas such as customers or invoices.
In the list above, the AP_XACT_FS interface is used for incremental extraction of AP data from EBS and uses the SQ_AP_INVOICES_ALL interface as a source, something that’s now possible through a new “inline view” knowledge module that ships with the BI Apps. Taking a closer look at the AP_XACT_FS mapping you can see it using the SQ_AP_INVOICES_ALL interface as a data source in the same way that the Informatica mapping used mapplets and source qualifier transformations:
Looking then at the corresponding SQ_AP_INVOICES_ALL interface, you can see that this is where the main extract logic goes, corresponding to the mapplets and source qualifiers in the Informatica version.
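To make the inline view idea a bit more concrete, here’s a rough sketch (in Python, just so the SQL assembly is visible) of the shape of SQL this pattern resolves to: the SQ interface’s extract query gets wrapped as an inline view, with the incremental filter applied on top. The column list and the last-extract-date value are my own illustrations, not the exact code the knowledge module generates.

    # Sketch of the SQL shape the "inline view" knowledge module plausibly
    # produces: the SQ_AP_INVOICES_ALL extract logic becomes an inline view,
    # and the AP_XACT_FS interface selects from it with an incremental filter.
    # Column names and the last_extract_date value are illustrative only.

    last_extract_date = "2009-06-01"  # in ODI this would come from a repository variable

    sq_ap_invoices_all = """
        SELECT inv.INVOICE_ID,
               inv.INVOICE_AMOUNT,
               inv.LAST_UPDATE_DATE
        FROM   AP_INVOICES_ALL inv"""

    ap_xact_fs = f"""
        SELECT sq.INVOICE_ID,
               sq.INVOICE_AMOUNT
        FROM   ({sq_ap_invoices_all}
               ) sq
        WHERE  sq.LAST_UPDATE_DATE > TO_DATE('{last_extract_date}', 'YYYY-MM-DD')"""

    print(ap_xact_fs)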
It’s a similar story with the SIL mappings, with SQ interfaces abstracting away the main data sources and then acting as the data source for the regular ODI interface. One significant difference though between the old Informatica mappings and the new ODI ones is that much of what you used to do with custom Informatica transformations – getting the ETL_PROC_WID, checking that rows haven’t been inserted already, handling SCD2 history tracking and so on – is either handled by new BI Apps-specific ODI knowledge modules that are selected in the “flow” part of the interface, like this:
or, in the case of SCD2 handling, is defined as part of the column properties in the ODI Designer Model view, like this:
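To give a feel for what that SCD2 handling amounts to once the knowledge module has done its work, here’s a rough sketch of the classic two-statement approach: close off the current row for any changed keys, then insert the new versions. W_EXAMPLE_D and W_EXAMPLE_DS are hypothetical target and staging tables of my own; the INTEGRATION_ID, EFFECTIVE_FROM_DT/EFFECTIVE_TO_DT and CURRENT_FLG columns follow the usual BI Apps naming conventions, but the actual code the knowledge module generates will differ in detail.

    # Rough sketch of classic SCD2 processing as two SQL statements.
    # W_EXAMPLE_D / W_EXAMPLE_DS are hypothetical target and staging tables;
    # the real knowledge-module code will differ in detail.

    close_current_rows = """
        UPDATE W_EXAMPLE_D tgt
        SET    tgt.EFFECTIVE_TO_DT = SYSDATE,
               tgt.CURRENT_FLG     = 'N'
        WHERE  tgt.CURRENT_FLG = 'Y'
        AND    EXISTS (SELECT 1
                       FROM   W_EXAMPLE_DS stg   -- incoming staging rows
                       WHERE  stg.INTEGRATION_ID = tgt.INTEGRATION_ID
                       AND    stg.TRACKED_COL   <> tgt.TRACKED_COL)"""

    insert_new_versions = """
        INSERT INTO W_EXAMPLE_D
               (INTEGRATION_ID, TRACKED_COL,
                EFFECTIVE_FROM_DT, EFFECTIVE_TO_DT, CURRENT_FLG, ETL_PROC_WID)
        SELECT stg.INTEGRATION_ID, stg.TRACKED_COL,
               SYSDATE, TO_DATE('3714-01-01', 'YYYY-MM-DD'),   -- far-future end date
               'Y', :etl_proc_wid
        FROM   W_EXAMPLE_DS stg"""

    print(close_current_rows)
    print(insert_new_versions)

The point being that with ODI this sort of logic comes from the knowledge module and the column flags you set in the Model view, rather than being hand-coded into each mapping.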
From what I can see these changes are driven either by differences in the way that mappings work in ODI and Informatica – in Informatica, like OWB, a mapping can have several steps and can make use of shared transformations, whereas in ODI mappings translate to simple SELECT, UPDATE etc statements – or by the fact that ODI has more built-in data warehousing functionality in the form of its knowledge modules, so that we don’t need to code a solution for ETL ID handling, or for SCD2 handling, each time we write a mapping. Of course the interesting test of this will be when we try to customize an existing mapping or write our own new one, and it’ll also be interesting to see how well the new set-based ODI mappings run compared to the old Informatica ones.
There are a number of utilities that come with the mappings, scenarios and interfaces, which you can use to reset (truncate) the data warehouse, mark execution plans as having completed, generate the scenarios at the start, create extract views (presumably over the EBS tables) and so on.
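As a flavour of what the reset utility presumably does, here’s a minimal sketch; BI Apps warehouse tables follow the W_% naming convention, but the real utility is an ODI procedure and no doubt does rather more than this.

    # Minimal sketch of a "reset (truncate) the data warehouse" utility:
    # find the warehouse tables (W_% naming convention) and truncate each
    # one. The real utility is an ODI procedure; this is just the idea.

    def reset_warehouse(cursor):
        """Pass in a DB-API cursor connected as the warehouse schema owner."""
        cursor.execute(
            "SELECT table_name FROM user_tables "
            "WHERE table_name LIKE 'W\\_%' ESCAPE '\\'"
        )
        for (table_name,) in cursor.fetchall():
            cursor.execute(f"TRUNCATE TABLE {table_name}")
            print(f"Truncated {table_name}")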
In the next post I’ll look at performing the first data load, and we’ll see how these utilities are used along the way. It’ll also be interesting to see how execution plan restarts work, how easy it is to debug failed mappings and so on.
The execution plans themselves are stored in the ODI repository along with the mappings, with an initial set of master execution plans that are then added to as you create your own customizations. I’ll cover customizations later, but for now I’ll say that whilst the customization process is similar – you have category 1, 2 and 3 customizations and you follow the “safe path” through the predefined interfaces – the way that you preserve customizations through upgrades and patches is a fair bit different. More on this in a later post.
Moving on to the Configuration Manager, again one benefit of this and ODI compared to the DAC and Informatica is that you only have to define tables, columns and so on once (in the ODI repository) and not twice, in the DAC and the Informatica Source and Target Analyzer utilities. One downside though is that you can’t use the Configuration Manager to take a target table, trace it back to the task that loaded it, and then trace that back through the SDE task to the source tables that provided its data. However, I’ve seen demos of Oracle BI EE running reports that show the data lineage in the ODI repository, so presumably that will replace this DAC functionality.
Defining the execution plan is as simple as selecting the subject areas (no generation of parameters, and no generation of the ordered task list, so I wonder how the final task dependencies are generated when you select more than one subject area?) and then saving the resulting execution plan.
Once the execution plan is defined you can take a look in the Package Structure view, like this:
This is not too dissimilar to the ordered task list in the DAC (though handily, it’s shown in a tree structure). You can then run it either manually from the Configuration Manager, or manually or on a schedule from the ODI Designer application.
Unlike the SIL workflows in Informatica, the SIL packages that are executed from the Configuration Manager contain logic for both incremental and full loads, and they also call lots of individual interfaces and ODI tools rather than the single mapping that each Informatica workflow calls. A simple sketch of that full-versus-incremental branching follows below.
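Here’s what that branching looks like in principle; the function and variable names here are mine, not the actual ODI objects.

    # Hypothetical sketch of the branch a SIL package makes between full
    # and incremental processing; in ODI this is done with package steps
    # and variables rather than Python, of course.

    def run_sil_package(last_extract_date):
        if last_extract_date is None:
            print("No previous extract date: running the full-load interfaces")
        else:
            print(f"Running incremental interfaces, filtering on {last_extract_date}")

    run_sil_package(None)          # first ever load: full
    run_sil_package("2009-06-01")  # subsequent loads: incremental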
The way that execution plans work is obviously a bit different to the Informatica versions of the BI Apps. In those versions, the execution plan exists only in the DAC repository and is used to make calls out to the Informatica Workflow Manager to run individual mappings that correspond to DAC tasks. In the 7.9.5.2 release, the steps to load the warehouse, then to run the SDE, PLP, SIL etc mappings, then within these the mappings that correspond to the HR, OE, Financials etc modules are held in a hierarchy of ODI packages that themselves call the various SIL, SDE and PLP mappings.
Presumably then this is how the “ordered task list” step is skipped, as the sequence in which the mappings are executed is defined within these master execution plan packages, which are then run in parallel for each stage in the ETL process (the numbers next to the various stages in the above screenshot). I’ll cover this more in the next two postings, where I’ll look at performing the first load and then making some customizations.
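To illustrate the sequencing model, here’s a small sketch of the idea: the numbered stages run one after another, whilst the mappings within each stage run in parallel. The stage and mapping names are invented for the purposes of the example.

    # Sketch of the execution plan sequencing described above: stages run
    # in order, mappings within a stage run in parallel. Stage and mapping
    # names are invented for illustration.
    from concurrent.futures import ThreadPoolExecutor

    execution_plan = [
        ("1 - SDE extracts",  ["SDE_ORA_ExampleFact", "SDE_ORA_ExampleDimension"]),
        ("2 - SIL loads",     ["SIL_ExampleFact", "SIL_ExampleDimension"]),
        ("3 - PLP post-load", ["PLP_ExampleAggregate"]),
    ]

    def run_mapping(name):
        print(f"running {name}")  # in reality the ODI agent runs the scenario

    for stage_name, mappings in execution_plan:
        print(f"-- stage {stage_name}")
        with ThreadPoolExecutor() as pool:         # mappings within a stage in parallel
            list(pool.map(run_mapping, mappings))  # the stage completes before the next starts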
Finally, once you’ve defined the execution plan (something you can only do in the Configuration Manager), you can actually execute and monitor it directly in the ODI Operator application, which gives you greater visibility as to what’s gone wrong and where things are at the moment.
In the view above, you can see the mapping tasks being executed by the interface agent, and the various supporting tasks being executed by the workflow agent. Two of the mappings in the interface list have failed, so when we come to do the full load later in the week we’ll have to see how we can fix these.