I have a scenario where I need to first load a table called "name_address", storing all the claimant names along with a number generated using a given formula. This number will then be used in other main tables such as the claim table, the transaction table, etc. Can you please help me decide whether constraint-based loading will be useful here, or whether I need to specify a target load plan?
We have a scenario in our project where I have to create different flat files as output based on the company number.
The issue is:
1) We run this workflow once every month.
2) New company numbers and corresponding records may be added at any time during the month, so each time a company is added we have to create a flat file for it as well.
3) The name of the flat file should be "Loss history report for" + company number.
4) Every month when the workflow runs, the flat files have to be placed in that month's folder, e.g. in January the files should be placed in the Jan2012 folder. The folders will already be present.
5) Is it also possible to create the folders inside the target directory every month?
Let me show a simple mapping involving the Java transformation.
Here the source is the emp table and the target is the rangasp table in Oracle.
The requirement is to change every employee name to "java".
The Java code has been written so that the ename of every record is changed to "java".
The left side contains all the input and output ports.
generateRow() is used to generate an output row for each input record.
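The logic above can be sketched in plain Python. This is only a simulation of what the Java transformation's row-level code does (set the output ename, then emit the row via generateRow()); the function and data here are illustrative, not actual PowerCenter code.

```python
# Simulation of the Java transformation logic: for every input row,
# overwrite ename with "java" and emit exactly one output row.
def transform(rows):
    out = []
    for eno, ename in rows:
        out.append((eno, "java"))  # equivalent of ENAME = "java"; generateRow();
    return out

source = [(721, "Sri"), (678, "Uma"), (899, "Ragu")]
print(transform(source))
```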
The output is something like this :
eno ename
721 java
678 java
899 java
Please do comment and ask if you have any doubts. Thank you!
Input :
empno ename job
7219 Sri CLERK
7782 Uma MANAGER
7839 Ragu PRESIDENT
7934 Sendil CLERK
Output :
Repeat each row three times and also change the ename to "Srivatsan"
empno ename job
7219 Srivatsan CLERK
7219 Srivatsan CLERK
7219 Srivatsan CLERK
7782 Srivatsan MANAGER
7782 Srivatsan MANAGER
7782 Srivatsan MANAGER
7839 Srivatsan PRESIDENT
7839 Srivatsan PRESIDENT
7839 Srivatsan PRESIDENT
7934 Srivatsan CLERK
7934 Srivatsan CLERK
7934 Srivatsan CLERK
Post your mapping; one lucky winner will be selected!
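For reference, the expected row-tripling behaviour can be sketched in plain Python (illustrative only; in the mapping itself this would be done by a Java transformation calling generateRow() three times per input row):

```python
# Simulation of the scenario: emit three copies of each input row,
# replacing ename with "Srivatsan" in every copy.
def triple_and_rename(rows, new_name="Srivatsan"):
    out = []
    for empno, ename, job in rows:
        for _ in range(3):  # generateRow() called three times per input row
            out.append((empno, new_name, job))
    return out
```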
Let's discuss the Transaction Control transformation in this video:
I will show you a scenario using the Transaction Control transformation and explain when it has to be used.
Here we can see three components:
1) Expression transformation
2) Transaction Control transformation
3) Target
The source for us is the emp table of the scott schema.
The requirement is to split the source rows into different flat files based on the department number.
That is:
deptno 10, 20, 30 need to be split and saved in separate files.
We could ask, "Why can't this be done easily with a Router transformation?"
A good question! The answer: when the department numbers change dynamically, how would we get the router groups to change?
So this is how this scenario is handled.
Here we have a field where we check this condition: IIF(v_DEPTNO_2 != DEPTNO, 0, 1)
If a department number comes in for the first time, this evaluates to 0, and our Transaction Control transformation will create a flat file with the corresponding rows.
FileName is a field used to provide the name of the flat file, which needs to be generated dynamically.
Based on the column holding this condition, the Transaction Control transformation performs the following commit activity:
IIF(DEPTNO_SPLITTING = 0, TC_COMMIT_BEFORE, TC_CONTINUE_TRANSACTION)
Here, whenever a new department number arrives, a commit is done and a new flat file is created. One important note: the input has to be sorted first.
In this way, a file is created for each department number.
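The commit-on-change logic can be sketched in plain Python (a simulation only; file creation is modelled with a dict, and the file-name pattern here is an assumption for illustration):

```python
# Sketch of the Transaction Control split: input MUST be sorted by deptno,
# just like in the real mapping. A new file starts whenever deptno changes.
def split_by_dept(sorted_rows):
    files = {}
    prev_dept = None
    fname = None
    for empno, deptno in sorted_rows:
        if deptno != prev_dept:            # IIF(v_DEPTNO_2 != DEPTNO, 0, 1) -> 0
            fname = f"dept_{deptno}.txt"   # the dynamically generated FileName
            files[fname] = []              # TC_COMMIT_BEFORE: start a new file
            prev_dept = deptno
        files[fname].append(empno)         # TC_CONTINUE_TRANSACTION
    return files
```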
The Source Qualifier transformation is an active transformation, because the output row count can be modified using options like "Select Distinct".
1) We can get the output from the Source Qualifier sorted, using the property called "Number of Sorted Ports".
2) We can filter at the source level itself, so that unnecessary records are not taken into further processing, using the Source Filter property.
3) We can use the "Select Distinct" property to avoid duplicate output from the Source Qualifier.
4) We have the option to write Pre- and Post-SQL, which will be executed before and after the main query.
5) We can use the custom / user-defined SQL property to write our own SQL, so that the output is generated based on it.
Note: when we use a user-defined/custom SQL query, the Select Distinct, sorted ports, filter, etc. won't take effect, as they are overridden by the custom SQL.
Whenever we have only one Source Qualifier transformation and multiple targets, our approach to specifying the load order of the targets is the property called "Constraint based loading".
By enabling this checkbox, the targets will be loaded such that the master table is loaded first and then the dependent tables.
If we have more than one Source Qualifier transformation, then we have to use the Target Load Plan.
The Target Load Plan is used to load the targets in whichever order we require, and it applies when the mapping has more than one Source Qualifier transformation.
The order can be specified by going to Mappings -> Target Load Plan.
Here we order the various Source Qualifier transformations used in our mapping; the corresponding targets are then loaded in that order.
Here the transformations we have to use are Sorter and Aggregator.
Let's see the approach:
1) The input from the Source Qualifier is sent into the Sorter transformation and sorted by Student Name and Subject.
The following output is obtained:
Student Name  Subject Name      Marks
Sam           Maths             100
Sam           Life Science      70
Sam           Physical Science  80
John          Maths             75
John          Life Science      100
John          Physical Science  85
Tom           Maths             80
Tom           Life Science      100
Tom           Physical Science  85
2) The output from the Sorter transformation is sent into the Aggregator transformation, and Group By is selected on Student Name, as we need the final output grouped by student name.
Also, in the Aggregator transformation three columns are created:
1) Maths --- with formula -- MAX(Marks, Subject = 'Maths')
2) Life Science --- with formula -- MAX(Marks, Subject = 'Life Science')
3) Physical Science --- with formula -- MAX(Marks, Subject = 'Physical Science')
3) Now Student Name, Maths, Life Science, and Physical Science are connected to the target.
The output obtained is as follows:
Student Name  Maths  Life Science  Physical Science
Sam           100    70            80
John          75     100           85
Tom           80     100           85
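The Sorter + Aggregator pivot above can be sketched in plain Python (a simulation only; in the mapping, the per-subject columns come from the conditional MAX expressions shown above):

```python
# Sketch of the pivot: group rows by student, then keep a conditional
# maximum per subject, mirroring MAX(Marks, Subject = '...') per column.
def pivot_marks(rows, subjects=("Maths", "Life Science", "Physical Science")):
    result = {}
    for student, subject, marks in rows:
        row = result.setdefault(student, {s: None for s in subjects})
        if row[subject] is None or marks > row[subject]:
            row[subject] = marks  # conditional MAX for this subject column
    return result
```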
The Source Qualifier transformation is used to join homogeneous sources. Whenever we use a flat file, the Source Qualifier SQL properties are disabled, and we can't use it to join the sources.
This is one of the reasons we have the Joiner transformation. Also, when we have sources that don't share a common column / primary-foreign key relationship, we have to go for the Joiner transformation.
The Joiner transformation is used for the following:
1) Joining data from different relational databases.
2) Joining data from different flat files.
3) Joining relational sources and flat files.
Sources that don't have a common column can be joined by creating a dummy column using a Sequence Generator transformation.
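The dummy-column trick can be sketched in plain Python (illustrative only; in the mapping, each pipeline gets a running number from a Sequence Generator, and the Joiner matches on that number):

```python
# Sketch of joining two sources with no common column: attach a sequence
# number (NEXTVAL) to each pipeline, then join positionally on it.
def join_on_dummy_key(source_a, source_b):
    a_keyed = {i: row for i, row in enumerate(source_a, start=1)}
    b_keyed = {i: row for i, row in enumerate(source_b, start=1)}
    return [a_keyed[k] + b_keyed[k] for k in a_keyed if k in b_keyed]
```

Note this only makes sense when the two sources are known to line up row-for-row, which is the usual caveat with this technique.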
Here we have a scenario to get the below mentioned output from the given input. Let us see how to solve this scenario :
Input
Employee name Basic DA HRA MedicalA FoodA
John 5000 200 1000 500 500
George 6000 100 800 500 200
Carol 3000 700 700 800 900
Output Required :
Employee name Salary_type Salary_Amt
John Basic 5000
John DA 200
John HRA 1000
John MedicalA 500
John FoodA 500
George Basic 6000
George DA 100
George HRA 800
George MedicalA 500
George FoodA 200
Carol Basic 3000
Carol DA 700
Carol HRA 700
Carol MedicalA 800
Carol FoodA 900
Answer :
1) The input from the source has to be connected to a Normalizer transformation.
2) In the Normalizer transformation we get two outputs: one is the employee name and the other is the amount. A GCID number is also generated, which identifies the salary type.
3) In this way the Normalizer transformation generates a transpose of the input table.
4) The output from the Normalizer can be given to an Expression transformation to derive the salary type from the GCID number.
The intermediate output will look like this:
Employee name GCID Salary_Amt
John 1 5000
John 2 200
John 3 1000
John 4 500
John 5 500
George 1 6000
George 2 100
George 3 800
George 4 500
George 5 200
Carol 1 3000
Carol 2 700
Carol 3 700
Carol 4 800
Carol 5 900
5) The Normalizer is an active transformation; it is the reverse of the Aggregator transformation, which reduces the number of output rows, whereas the Normalizer increases it.
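The Normalizer + Expression steps above can be sketched in plain Python (a simulation only; the GCID-to-type mapping here assumes the columns occur in Basic..FoodA order, as in the input table):

```python
# Sketch of the unpivot: one input row with five salary columns becomes
# five output rows; GCID 1..5 identifies which column each row came from.
GCID_TO_TYPE = {1: "Basic", 2: "DA", 3: "HRA", 4: "MedicalA", 5: "FoodA"}

def normalize(rows):
    out = []
    for name, *amounts in rows:                    # amounts in column order
        for gcid, amt in enumerate(amounts, start=1):
            # the Expression t/f would map GCID -> salary type,
            # e.g. DECODE(GCID, 1, 'Basic', 2, 'DA', ...)
            out.append((name, GCID_TO_TYPE[gcid], amt))
    return out
```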
Suppose we have to load three targets in such a way that:
1) the first target is loaded with records 1, 4, 7, ...
2) the second target is loaded with records 2, 5, 8, ...
3) the third target is loaded with records 3, 6, 9, ...
How can this be achieved?
Answer :
1) Let's have an Expression transformation and load all the data from the Source Qualifier into it.
2) Now we create a Sequence Generator transformation to generate a sequence number for identifying which target each record should be loaded into.
3) Connect the Sequence Generator to the Expression transformation.
4) Let's use a Router transformation to distribute the records to the three targets by creating three groups in it, namely target1, target2, target3.
5) The following conditions are given on the three groups in the Router transformation to send each record to its respective target (the MOD divisor must be 3, matching the number of targets):
target1 --- MOD(Sequence_no, 3) = 1
target2 --- MOD(Sequence_no, 3) = 2
target3 --- MOD(Sequence_no, 3) = 0
6) The outputs of the Router are connected to the respective targets.
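The round-robin routing can be sketched in plain Python (a simulation only; note the MOD divisor must equal the number of targets, here 3, or the groups won't cover all records):

```python
# Sketch of the Router split: a running sequence number plus MOD(seq, 3)
# sends records 1,4,7... to target1, 2,5,8... to target2, 3,6,9... to target3.
def route(records):
    targets = {1: [], 2: [], 3: []}
    for seq, rec in enumerate(records, start=1):   # Sequence Generator NEXTVAL
        group = {1: 1, 2: 2, 0: 3}[seq % 3]        # the three group conditions
        targets[group].append(rec)
    return targets
```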
If the source is a flat file, then the Source Qualifier cannot be used to return distinct data, since its SQL properties are disabled.
To avoid duplicate rows we have to use a Sorter transformation and select the option called "Distinct".
One more option to avoid duplicate rows is an Aggregator transformation: a Group By on all the ports (or on the columns that define a duplicate) gives us non-duplicate records.
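The effect of both options can be sketched in plain Python (a simulation only; the Sorter's "Distinct" and an Aggregator grouped on all ports both keep one copy of each exact row):

```python
# Sketch of deduplication: keep the first occurrence of each full row,
# which is what Sorter "Distinct" (or grouping on every port) produces.
def distinct_rows(rows):
    seen = set()
    out = []
    for row in rows:
        if row not in seen:
            seen.add(row)
            out.append(row)
    return out
```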