Thursday, 29 May 2014

How to schedule or unschedule a workflow using cron jobs in Unix?

Setting the job execution time

With at, you must specify a time the job should execute. The format you use to indicate the time is very flexible and may consist of the following:
  • Time: Enter a one- or two-digit number (0-23) to indicate the start of an hour on a 24-hour clock (e.g., 13 is 13:00 or 1:00pm). To schedule the job to occur at a time other than the start of the hour, add minutes (00-59), optionally separated from the hour by a colon (e.g., 1334 or 13:34). You may follow the number with an am or pm to indicate the specific time on a 12-hour clock (e.g., 1:34pm or 0134pm). You may also use the words now, noon, and midnight to indicate the current time, noon, or midnight. If the time you indicate is later than the current time and you haven't specified a date, the job will execute the same day; otherwise, it will execute the next day.
  • Date: You can schedule a job to execute on a specific date. You can use keywords such as today, tomorrow, or one of the days of the week. The job will execute at the soonest possible date that meets the requirements. You may also enter a fully qualified date, such as November 9, 2010. The year is optional, and you can use an abbreviation for the month. As long as the date is unambiguous, your job request will probably succeed.
  • Increment: You also can specify the execution time by indicating how far in the future it should be, relative to the current time. To do this, enter a plus sign ( + ), followed by a number and then one of the following keywords:

    • minutes
    • hours
    • days
    • weeks
    • months
    • years
    For example, if the current time is 12:00pm, the increment of +2 weeks would set the execution time at noon on a day two weeks hence.
These elements can be combined, for example:
  • To schedule a job for the following Tuesday at a time two hours later than the current time, use: at tuesday +2 hours
  • To set the execution time for 9:00 in the morning on the second day of next February, use: at 9am February 2
  • To set the execution time for 1:34pm on a date exactly three months from when you issue the at command, use: at 1334 +3 months
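
For example, the following commands show scheduling and unscheduling from the shell (a minimal sketch; the script path is hypothetical):

  # schedule a hypothetical script at 1:34pm three months from now
  echo "/home/user/backup.sh" | at 1334 +3 months
  # list pending at jobs, then unschedule (remove) job number 5
  atq
  atrm 5
  # the cron equivalent: add this line via "crontab -e" to run the script
  # every day at 9:00am; delete the line (or run "crontab -r" to clear
  # all entries) to unschedule it
  0 9 * * * /home/user/backup.sh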

Thursday, 8 May 2014

Informatica Interview Questions set-8

1. What is a commit interval and explain the types?
A commit interval is the interval at which the PowerCenter server commits data to targets during a session. The commit interval is the number of rows you want to use as a basis for the commit point.
Target-based commit: The PowerCenter server commits data based on the number of target rows and the key constraints on the target table. The commit point also depends on the buffer block size and the commit interval.
Source-based commit: The PowerCenter server commits data based on the number of source rows read, at or near the commit interval.
2. Difference between connected and unconnected lookup?
  • A connected lookup receives input values directly from the pipeline; an unconnected lookup receives input values from the result of a :LKP expression in another transformation.
  • A connected lookup can use a dynamic or static cache; an unconnected lookup can use only a static cache.
  • In a connected lookup, the cache includes all lookup columns used in the mapping (lookup table columns included in the lookup condition and lookup table columns linked as output ports to other transformations); in an unconnected lookup, the cache includes all lookup/output ports in the lookup condition and the lookup/return port.
  • A connected lookup can return multiple columns from the same row or insert into the dynamic lookup cache; an unconnected lookup designates one return port (R) and returns one column from each row.
  • In a connected lookup, if there is no match for the lookup condition, the Informatica server returns the default value for all output ports (if you configure dynamic caching, the server inserts rows into the cache); in an unconnected lookup, if there is no match for the lookup condition, the Informatica server returns NULL.
  • A connected lookup passes multiple output values to another transformation (link lookup/output ports to another transformation); an unconnected lookup passes one output value, through the lookup/output/return port, to the transformation calling the :LKP expression.
  • A connected lookup supports user-defined default values; an unconnected lookup does not.
3. Explain index cache and data cache?
 The Informatica server stores condition values in the index cache and output values in the data cache.
4. Explain the workflow process?
 The PowerCenter server uses both process memory and system shared memory to perform these tasks.
Load Manager process: stores and locks the workflow tasks and starts the DTM to run the sessions.
Data Transformation Manager (DTM) process: performs session validations, creates threads to initialize the session, reads, writes, and transforms data, and handles pre- and post-session operations.
     The default memory allocation is 12,000,000 bytes.
5. How does the Informatica server sort string values in the Rank transformation?
When the Informatica server runs in ASCII data movement mode, it sorts session data using a binary sort order. If you configure the session to use a binary sort order, the Informatica server calculates the binary value of each string and returns the specified number of rows with the highest binary values for the string.
6. Can you use the mapping parameters or variables created in one mapping in another mapping?
No. We can use mapping parameters or variables only in transformations of the same mapping or mapplet in which we created them.
7. Difference between the source filter and the Filter transformation?
 A source filter filters data from relational sources only, whereas the Filter transformation can filter data from any type of source.
8. What is data driven?
 The Informatica server follows instructions coded into Update Strategy transformations within the session mapping to determine how to flag records for insert, update, delete, or reject. If you do not choose the data-driven option, the Informatica server ignores all Update Strategy transformations in the mapping.
9. Difference between OLTP and OLAP
  1. OLTP is application oriented (e.g., a purchase order is functionality of an application); OLAP is subject oriented (subjects such as customer, product, item, time).
  2. OLTP is used to run the business; OLAP is used to analyze the business.
  3. OLTP holds detailed data; OLAP holds summarized data.
  4. OLTP access is repetitive; OLAP access is ad hoc.
  5. OLTP accesses few records at a time (tens) with simple queries; OLAP accesses large volumes at a time (millions) with complex queries.
  6. OLTP uses a small database; OLAP uses a large database.
  7. OLTP holds current data; OLAP holds historical data.
  8. OLTP serves clerical users; OLAP serves knowledge users.
  9. OLTP loads row by row; OLAP uses bulk loading.
  10. OLTP is time invariant; OLAP is time variant.
  11. OLTP data is normalized; OLAP data is de-normalized.
  12. OLTP uses an E-R schema; OLAP uses a star schema.
10. What are the types of data warehousing?
EDW (Enterprise Data Warehouse)
  • It provides a central database for decision support throughout the enterprise
  • It is a collection of data marts
DATAMART
  • It is a subset of a data warehouse
  • It is a subject-oriented database which supports the needs of individual departments in an organization
  • It is called a high-performance query structure
  • It supports a particular line of business, like sales, marketing, etc.
ODS (Operational Data Store)
  • It is defined as an integrated view of operational databases designed to support operational monitoring
  • It is a collection of operational data sources designed to support transaction processing
  • Data is refreshed near real-time and used for business activity
  • It is an intermediate layer between OLTP and OLAP which helps to create instant reports
11. Data Modeling Types:
  • Conceptual Data Modeling
  • Logical Data Modeling
  • Physical Data Modeling
  • Dimensional Data Modeling
1. Conceptual Data Modeling
  • A conceptual data model includes all major entities and relationships, does not contain much detail about attributes, and is often used in the INITIAL PLANNING PHASE.
  • A conceptual data model is created by gathering business requirements from various sources like business documents, discussions with functional teams, business analysts, subject matter experts, and end users who do the reporting on the database. Data modelers create the conceptual data model and forward it to the functional team for review.
  • Conceptual data modeling gives the functional and technical teams an idea of how the business requirements will be projected into the logical data model.

2. Logical Data Modeling
  • This is the actual implementation and extension of a conceptual data model. A logical data model includes all required entities, attributes, key groups, and relationships that represent business information and define business rules.
3. Physical Data Modeling
  • A physical data model includes all required tables, columns, relationships, and database properties for the physical implementation of databases. Database performance, indexing strategy, physical storage, and denormalization are important parameters of a physical model.
Logical vs. Physical Data Modeling
  • A logical data model represents business information and defines business rules; a physical data model represents the physical implementation of the model in a database.
  • Entity → Table
  • Attribute → Column
  • Primary Key → Primary Key Constraint
  • Alternate Key → Unique Constraint or Unique Index
  • Inversion Key Entry → Non-Unique Index
  • Rule → Check Constraint, Default Value
  • Relationship → Foreign Key
  • Definition → Comment
Dimensional Data Modeling
  • A dimensional model consists of fact and dimension tables
  • It is an approach to developing schema DB designs
Types of dimensional modeling
  • Star schema
  • Snowflake schema
  • Starflake schema (or) hybrid schema
  • Multi-star schema
12. What are important aspects of a star schema and a snowflake schema?
  • In a star schema, every dimension will have a primary key.
  • In a star schema, a dimension table will not have any parent table.
  • Whereas in a snowflake schema, a dimension table will have one or more parent tables.
  • In a star schema, hierarchies for the dimensions are stored in the dimension table itself.
  • Whereas in a snowflake schema, hierarchies are broken into separate tables. These hierarchies help to drill down the data from the topmost hierarchies to the lowermost hierarchies.
13. What are the types of facts?
  • Additive - Measures that can be summed up across all dimensions.
    • Ex: Sales Revenue
  • Semi-Additive - Measures that can be summed up across some dimensions but not others.
    • Ex: Current Balance
  • Non-Additive - Measures that cannot be summed up across any of the dimensions.
    • Ex: Student attendance
14. Why do we need a staging area for a DWH?
  • The staging area is needed to clean operational data before loading it into the data warehouse.
  • Cleaning here means merging data which comes from different sources.
  • It is the area where most of the ETL is done.
Data Cleansing
  • It is used to remove duplicates
  • It is used to correct wrong email addresses
  • It is used to identify missing data
  • It is used to convert data types
  • It is used to capitalize names and addresses


                                 Informatica Architecture

The PowerCenter domain
            It is the primary unit of administration.
            An installation can have a single domain or multiple domains.
            It is a collection of nodes and services.
Nodes
   A node is the logical representation of a machine in a domain.
One node in the domain acts as a gateway node to receive service requests from clients and route them to the appropriate service and node.
Integration Service:
The Integration Service does all the real work. It extracts data from sources, processes it as per the business logic, and loads data to targets.
Repository Service:
The Repository Service is used to fetch data from the repository and send it back to the requesting components (mostly client tools and the Integration Service).
PowerCenter Repository:
The repository is simply a relational database which stores all the metadata created in PowerCenter.
PowerCenter Client Tools:
 The PowerCenter Client consists of multiple tools.
PowerCenter Administration Console:
This is a web-based administration tool you can use to administer the PowerCenter installation.

Thanks
Ur's Hari

What is Informatica project architecture, or what is your project architecture?



SQL Interview Questions Set-1

1)      FIRST AND LAST RECORD FROM EMP TABLE?
SELECT * FROM EMP WHERE ROWNUM=1 UNION
SELECT * FROM EMP WHERE ROWID=(SELECT MAX(ROWID) FROM EMP);
2)      DISPLAY Nth RECORD FROM EMP TABLE?
SELECT * FROM (SELECT ROWNUM AS RNO,EMP.* FROM EMP) WHERE RNO=&N;
3)      DISPLAY 5th AND 8th RECORDS FROM EMP TABLE?
SELECT * FROM (SELECT ROWNUM AS RNO,EMP.* FROM EMP) WHERE RNO IN(5,8);
4)      DISPLAY ALTERNATE ROWS FROM EMP TABLE? (ODD NUMBER ROWS)
SELECT * FROM (SELECT ROWNUM AS RNO,EMP.* FROM EMP) WHERE MOD(RNO,2)=1;
5)      DISPLAY ALTERNATE ROWS FROM EMP TABLE? (EVEN NUMBER ROWS)
SELECT * FROM (SELECT ROWNUM AS RNO, EMP.* FROM EMP) WHERE MOD (RNO, 2) =0;
6)      DISPLAY LAST RECORD FROM EMP TABLE?
SELECT * FROM EMP WHERE ROWID=(SELECT MAX(ROWID) FROM EMP);
7)      DISPLAY LAST 3 RECORDS FROM EMP TABLE?
SELECT * FROM (SELECT * FROM EMP ORDER BY ROWID DESC) WHERE ROWNUM<=3;
8)      DISPLAY FIRST 3 RECORDS AND LAST 3 RECORDS FROM EMP TABLE?
SELECT * FROM EMP WHERE ROWNUM<=3 UNION
SELECT * FROM (SELECT * FROM EMP ORDER BY ROWID DESC) WHERE ROWNUM<=3;
9)      DISPLAY 5th MAXIMUM SALARY OF EMP TABLE?
SELECT * FROM (SELECT DENSE_RANK() OVER (ORDER BY SAL DESC) AS RNK,
EMP.* FROM EMP) WHERE RNK=5;
10)  DISPLAY Nth MAXIMUM SALARY OF EMP TABLE USING A CORRELATED SUBQUERY?
SELECT DISTINCT (A.SAL) FROM EMP A WHERE &N=(SELECT COUNT (DISTINCT (B.SAL)) FROM EMP B
WHERE A.SAL<=B.SAL);
11)  DISPLAY SALES DEPT AVG(SAL) FROM EMP TABLE?
SELECT AVG(SAL) FROM EMP WHERE DEPTNO=(SELECT DEPTNO FROM  DEPT WHERE DNAME='SALES');
12)  DEPT WISE MAXIMUM SALARY OF EMP TABLE?
SELECT * FROM EMP WHERE (DEPTNO,SAL) IN (SELECT DEPTNO,MAX(SAL) FROM EMP GROUP BY DEPTNO);
                                    (OR)
SELECT * FROM EMP  E WHERE SAL=(SELECT MAX(SAL) FROM EMP WHERE DEPTNO=E.DEPTNO);
13)  DISPLAY EACH EMPLOYEE WITH THE AVG(SAL) OF EMP TABLE?
SELECT EMPNO,ENAME,SAL,(SELECT AVG(SAL) FROM EMP) AVGSAL FROM EMP;
14)  DISPLAY TOP 3 SALARIES FROM EMP TABLE?
SELECT * FROM EMP WHERE 3> (SELECT COUNT(DISTINCT SAL) FROM EMP E WHERE EMP.SAL<E.SAL);
15)  DISPLAY THE 3rd HIGHEST SALARY FROM EMP TABLE?
SELECT * FROM EMP E WHERE 2=(SELECT COUNT(DISTINCT SAL) FROM EMP WHERE SAL>E.SAL);
16)  DISPLAY DUPLICATE RECORDS FROM EMP TABLE?
SELECT * FROM EMP  WHERE ROWID NOT IN (SELECT MAX(ROWID) FROM EMP GROUP BY ENAME);
17)  DISPLAY DISTINCT RECORDS FROM EMP TABLE?
SELECT * FROM EMP  WHERE ROWID IN (SELECT MAX(ROWID) FROM EMP GROUP BY ENAME);
18)  DISPLAY EMPLOYEES WITH CORRESPONDING MANAGERS?
SELECT A.ENAME EMPLOYEE ,B.ENAME MANAGER FROM EMP A INNER JOIN EMP B ON(A.MGR=B.EMPNO);
19)  DISPLAY BOTTOM N NUMBER OF RECORDS IN EMP TABLE?
SELECT * FROM EMP MINUS SELECT * FROM EMP WHERE ROWNUM<=(SELECT COUNT(*) -&N FROM EMP);
20)  DISPLAY TOP 10 SALARIES OF EMP TABLE?
SELECT * FROM (SELECT * FROM EMP ORDER BY SAL DESC) WHERE ROWNUM<=10;
21)  DISPLAY DEPT WISE  AVG(SAL) FROM EMP TABLE?

SQL> SELECT DEPTNO,AVG(SAL) FROM EMP GROUP BY DEPTNO;

    DEPTNO   AVG(SAL)
---------- ----------
        10 24233.3333
        20 2647.22222
        30       1900
22)  ELIMINATE THE DUPLICATE RECORDS WITHOUT USING DISTINCT?

DELETE FROM EMP WHERE ROWID NOT IN (SELECT MAX(ROWID) FROM EMP GROUP BY ENAME);

23)   DISPLAY THE 5th MAXIMUM SALARY IN EACH DEPARTMENT OF EMP TABLE?

SELECT  * FROM(SELECT DENSE_RANK() OVER(PARTITION BY DEPTNO ORDER BY SAL DESC) AS RNK, EMP.* FROM EMP ) WHERE RNK=5;

24)  WRITE A QUERY TO FIND THE HIGHEST SALARY EARNED BY AN EMPLOYEE IN EACH DEPARTMENT AND ALSO THE NUMBER OF EMPLOYEES WHO EARN THE HIGHEST SALARY?
SELECT DEPTNO,
 MAX(SAL) HIGHEST_SALARY,
 COUNT(1) KEEP(DENSE_RANK LAST ORDER BY SAL) CNT_HIGH_SAL
FROM EMP
GROUP BY DEPTNO;
                                                   (OR)
SELECT DEPARTMENT_ID,
 MAX(SALARY) HIGHEST_SALARY,
 COUNT(1) KEEP(DENSE_RANK LAST ORDER BY SALARY) CNT_HIGH_SAL
FROM EMPLOYEES
GROUP BY DEPARTMENT_ID;
25)   WRITE A QUERY TO GET THE TOP 2 EMPLOYEES WHO ARE EARNING THE HIGHEST SALARY IN EACH DEPARTMENT?
SELECT DEPTNO, EMPNO, SAL FROM
 (
SELECT DEPTNO, EMPNO, SAL,
ROW_NUMBER() OVER(PARTITION BY DEPTNO ORDER BY SAL DESC ) R
FROM EMP
) A WHERE R <= 2;

26)   WRITE A QUERY TO FIND THE EMPLOYEES WHO ARE EARNING MORE THAN THE AVERAGE SALARY IN THEIR DEPARTMENT? 

SELECT EMPNO, SAL FROM EMP E_O WHERE SAL >
(SELECT AVG(SAL) FROM EMP E_I WHERE E_I.DEPTNO = E_O.DEPTNO ); 
Thanks
Ur's Hari



Wednesday, 7 May 2014

Informatica Interview Questions set-7

Difference between Informatica 8.x and 9.x?


From version 9.0 onwards:

  • The lookup cache can be updated based on the results of an expression.
  • When a database deadlock happens, retry attempts can be made in a lookup.
  • Multiple rows can be returned from a lookup.
  • The size of session logs can be limited.
  • Auto-commit can be configured.
Source level performance tuning in Informatica?


In the Source Qualifier:
  • Use indexes on source table key columns.
  • Use SQL overrides to include joins and filters (see the sketch after this list).
  • Use hints for larger tables.
  • Use EXISTS instead of IN.
  • Limit the rows with filter conditions in the SQ.
  • Check the SQL override query cost.
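
For instance, a minimal sketch of a Source Qualifier SQL override that pushes a join and a filter into the database (the CUSTOMER and ORDERS tables and their columns are hypothetical):

  -- only the needed rows and columns enter the mapping
  SELECT C.CUST_NO, C.CUST_NAME, O.ORDER_AMT
  FROM CUSTOMER C
  INNER JOIN ORDERS O ON C.CUST_NO = O.CUST_NO
  WHERE O.ORDER_DATE >= TO_DATE('2014-01-01', 'YYYY-MM-DD')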

Mapping level performance tuning in Informatica?


In lookups:
  • Use a lookup SQL override to limit the rows.
  • Create indexes on the joining columns of lookup tables.
  • Use a persistent cache if the lookup is used in multiple places.

In Joiners:
  • Use Joiners only when the sources are heterogeneous.
  • The smaller of the two joining sources has to be the master.

In Filters:
  • Try to push the filter conditions to the source.
  • Otherwise, place the Filter as close to the source as possible.

In the Sequence Generator:
  • Cache the sequence generator values.

In Sessions:
  • Disable high precision values if not used.
  • Use Terse mode for log tracing.
In Expressions:
  • Use variable ports for repeated calculations.
  • Reduce nested IIF conditions.
  • Remove unnecessary type conversions.

Target level performance tuning in Informatica?

  • Use bulk load.
  • Drop indexes and constraints before loading (see the sketch after this list).
  • Use partitions on Oracle tables.
  • Avoid deadlocks.
  • Analyze the tables.
  • Increase the commit interval (fewer commits means less overhead).
  • Increase the database network packet size.
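
As an illustration, a sketch of pre- and post-session SQL around a bulk load (the table and index names are hypothetical):

  -- pre-session SQL: drop the index so the bulk load is not slowed down
  DROP INDEX IDX_TGT_CUSTOMER_NO;
  -- ... the bulk load runs here ...
  -- post-session SQL: rebuild the index after the load
  CREATE INDEX IDX_TGT_CUSTOMER_NO ON TGT_CUSTOMER (CUST_NO);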

Informatica Parameter and Variable differences?

  • A parameter cannot be changed during run time; it is assigned in a parameter file.
  • A variable can be changed during run time and is assigned at run time.
  • Parameters split into mapping, session, and system parameters.
  • Variables split into mapping and session variables.
Difference between Unconnected and Connected look ups?
What is bulk load, difficulties & precautions involved?

  • Bulk load suits targets that don't require any recovery on failure.
  • If the data volume to load is huge, bulk load can be used.
  • Bulk load does not write a database log, so it improves target loading performance.
  • Because there is no log, if recovery is needed after a failure the table has to be truncated and reloaded.
Which is better? Bulk loading or normal load?
  • Bulk loading is better with respect to performance.
  • Normal load is better with respect to failure recovery.
How to select distinct records from a flat file?
  • Use a Sorter and select the distinct option in the Informatica mapping.
  • Otherwise use an Aggregator and group by the key columns.
How to select distinct records using Informatica?
  • Select the distinct option in the Source Qualifier if the source is a database table.
  • Use a Sorter with the distinct option if the source is a flat file.
How to load multiple source files into one target table?
  • For this, at the session level, in the Mapping tab, under Sources, select the Source Qualifier and in the right-side pane change the Source Filetype property to Indirect.
  • Mention the list file name (for example MY_FILES.LST) for the Source File Name property; the list file format is sketched after this list.
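
A list file simply names one source file per line, for example (hypothetical paths):

  /data/src/customers_jan.dat
  /data/src/customers_feb.dat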
What is the way to load multiple target files dynamically?
  • If the number of target files is unknown, use the Transaction Control transformation.
  • Otherwise use a Router to do the same.
Types of caches in Informatica?
  • Static Cache
  • Dynamic Cache
  • Persistent Cache
Difference between Static, Dynamic and Persistent caches?
  • Static Cache - is created at run time and cannot be changed until the run is completed.
  • Dynamic Cache - is created at run time and can be changed during run time.
  • Persistent Cache - is created at the start of the workflow and cannot be changed until the load is completed, unless manually deleted; it can be reused between sessions without rebuilding the cache.
Is there a possibility to return more than one value from an unconnected lookup?
        
        Yes, by concatenating the columns and returning them as a single port.
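
        A sketch of the technique (the table and column names are hypothetical): concatenate in the lookup SQL override, then split the returned value in an Expression.

        -- lookup SQL override: return two columns as one delimited port
        SELECT CUST_NAME || '~' || CUST_CITY AS RET_VAL, CUST_NO FROM CUSTOMER
        -- in an Expression transformation, split the returned value:
        -- SUBSTR(RET_VAL, 1, INSTR(RET_VAL, '~') - 1)  gives CUST_NAME
        -- SUBSTR(RET_VAL, INSTR(RET_VAL, '~') + 1)     gives CUST_CITY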

What is active and passive transformation?
  • Active transformation - one which can change the number of rows that pass through it, e.g., Filter, Source Qualifier.
  • Passive transformation - one which does not change the number of rows that pass through it, e.g., Expression transformation.

    How to load unique records into one table and duplicate records into another table?

    Use the below mentioned logic (a SQL sketch of the same split follows this list).
    Src-->SQ-->Agg-->Rtr--> Tgt1(Unique Records)
                                |--> Tgt2(Duplicate Records)
    • In the Aggregator transformation, group by the primary key and count the rows.
    • In the Router, redirect to Tgt1 if count=1, else redirect the records to Tgt2.
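
    For comparison, the same split as a single Oracle statement (a sketch; CUSTOMER, TGT1, TGT2, and CUST_NO are hypothetical names):

    INSERT ALL
      -- key appears exactly once: unique record
      WHEN CNT = 1 THEN INTO TGT1 (CUST_NO) VALUES (CUST_NO)
      -- key appears more than once: duplicate record
      WHEN CNT > 1 THEN INTO TGT2 (CUST_NO) VALUES (CUST_NO)
    SELECT CUST_NO, COUNT(*) OVER (PARTITION BY CUST_NO) CNT
    FROM CUSTOMER;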

    How to identify the top 2 salaries without using a Rank transformation?

    Use the below mentioned logic (the Expression ports are sketched after this list).
    Src-->SQ-->Srt-->Exp--> Fil --> Tgt

    • In the Sorter, sort by salary in descending order.
    • In the Expression, assign row_no starting from 1 and increase it by 1 for each following record, like 1,2,3...etc.
    • In the Filter, use the condition row_no<=2.
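
    A minimal sketch of the Expression ports (v_row_no is a variable port, which keeps its value across input rows; o_row_no is the output port):

    v_row_no = v_row_no + 1
    o_row_no = v_row_no
    -- downstream Filter condition: o_row_no <= 2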
    There are 3 records as mentioned below.

    Dept_Id Dept_No
    101        10
    102        10
    103        10

    How many records will be inserted into target table?

    If there are no unique constraints defined on the Dept_No column of the target table then 3 records will be inserted; otherwise only one record will be inserted.


    Table1 has 1000 records and Table2 has 1,00,000 records.
    When a joiner transformation is used which one should be taken as master source and how?

    Table1 has to be taken as the master source, because the Informatica Integration Service caches the master table rows and compares each detail row against that cache; using the smaller table as the master keeps the cache small.




  • How will you identify duplicates in Oracle table?

    Use the below mentioned query.

    select cust_no, count(1) from customer 
    group by cust_no
    having count(1)>1;

    How to select unique records in a table?

    Use the below mentioned query.

    select distinct cust_no from customer;

    How to delete duplicate records in a table?

    Use the below mentioned query.

    delete from customer
    where rowid in
      (select rowid from
        (select rowid, row_number() over (partition by cust_no order by cust_no) dup
         from customer)
       where dup > 1);

    What are the types of Indexes?

    1. B-Tree Indexes - Constructed similar to binary trees; used for faster retrieval of unique values.
    2. Bitmap Indexes - Used for repeating (low-cardinality) values; a single bitmap points to many rows.
    3. Function-Based Indexes - Used to store computed values.
    4. Partitioned Indexes - Used on partitioned tables.

    What are the types of Partitions in Oracle?

    1. Range Partition - Partitions created based on a particular range, e.g., dates or numbers.
    2. List Partition - Partitions created based on listed values, like country or department.
    3. Hash Partition - Partitions created based on a hash function.
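
    For instance, a minimal sketch of a range-partitioned table (the SALES table and its columns are hypothetical):

    create table sales (
      sale_id   number,
      sale_date date,
      amount    number
    )
    partition by range (sale_date) (
      partition p2013 values less than (to_date('2014-01-01','yyyy-mm-dd')),
      partition p2014 values less than (to_date('2015-01-01','yyyy-mm-dd'))
    );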

  • How to find the top 3 earnings in each department?

    select * from (
      select empno, deptno, rank() over (partition by deptno order by sal desc) as r
      from emp e
    ) t
    where t.r <= 3;

    What are the analytical functions in Oracle?

    • RANK
    • AVG
    • COUNT
    • FIRST_VALUE
    • LAST_VALUE
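
    A small example combining them, showing each employee next to the department's top and average salary:

    select empno, deptno, sal,
           first_value(sal) over (partition by deptno order by sal desc) as top_sal,
           avg(sal) over (partition by deptno) as dept_avg
    from emp;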


    Difference between a function and a procedure?

    A function returns a value; a procedure does not return a value directly, but it can return values through OUT parameters.
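
    A minimal sketch of each (get_sal and get_sal_p are hypothetical names):

    create or replace function get_sal(p_empno number) return number is
      v_sal number;
    begin
      select sal into v_sal from emp where empno = p_empno;
      return v_sal;  -- the function itself returns the value
    end;
    /

    create or replace procedure get_sal_p(p_empno in number, p_sal out number) is
    begin
      -- the value comes back through the OUT parameter
      select sal into p_sal from emp where empno = p_empno;
    end;
    /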

    What are the wildcards used in pattern matching?

    '_' matches a single character and '%' matches any sequence of characters.
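
    For example, against the EMP table used above:

    -- four-character names starting with 'S'
    select ename from emp where ename like 'S___';
    -- names containing 'AR' anywhere
    select ename from emp where ename like '%AR%';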

    Which system table contains all the constraints information in Oracle?

    "USER_CONSTRANTS" table.

  • How to make Informatica Code Reusable?

    The following are the components to make Informatica code reusable which can reduce development time.

    1. Mapplet
    2. Reusable Transformation
    3. Shared folder
    4. Global Repository
    5. Worklet
    6. Reusable Session
    7. Reusable Task
    8. Mapping Parameter
    9. Mapping Variable
    10. Workflow Variable
    11. Worklet Variable
    12. Source and Target Connections
    13. Session variables
    14. Service Variables.

    What are the performance coding standards?

    1. Turn off verbose logging.
    2. Turn off 'Collect performance statistics' after performance testing.
    3. Try not to read a file over the network.
    4. Lookups and Aggregators slow down performance because they involve caching. Try to avoid Aggregators where possible and check the index and data cache sizes.
    5. Consider partitioning of files wherever required.
    6. Try to use pushdown optimization.
    7. Use variables/reusable transformations for repeated calculations.


    Other coding standards?

    1. There should not be any unused ports.
    2. The connection information of lookups should use $Source and $Target.
    3. When strings are read from the source, they should be trimmed (LTRIM and RTRIM).
    4. Wherever lookup output ports are received, null values should be handled properly.
    5. Common source and target tables have to be placed under a common folder.
    6. For output-only ports, all error messages must be removed.
    7. Check the status of the pmcmd command; if non-zero, then exit.
    8. All the components should be commented.
    9. The mapping should not contain any unused sources or targets.
    10. 'Enable high precision' should be disabled if not needed.
    11. 'Stop on errors' should be greater than '0'.
    12. The port names should be as per the standards.
    13. Nested IIFs have to be replaced with DECODE.
    14. A stored procedure should always return a value which can be used in Informatica to evaluate the success of the procedure.

    Name a few naming conventions of Informatica?

    Repository Objects - Naming Conventions
    • Mapping Name: m_<target table name> (Example: m_PERSON)
    • Session Name: s_<mapping name> (Example: s_m_PERSON)
    • Folder Name: Folder names for new users will be their department followed by their username. Existing users' folder names are typically their group followed by their user name. Folder names may also be based on project, subject area, and promotion group.
    • Log Files: <session_name>.log OR <workflow_name>.log
    • Bad Files: <session_name>.bad
    • Source-name & Target-name: Do not make any distinction here - Informatica color-codes the sources and targets, so it's easy to tell. We do try to prefix these with a folder name, so that when shortcuts or global objects are created we can tell where they originally reside.
    • Workflow Names: wflw_<meaningful name>


    Workflow Objects - Naming Conventions
    • Worklet Names: wklt_<meaningful name>
    • Command Line Names: cmd_<meaningful name>
    • Event Names: evtw_<meaningful name> for a wait event; evtr_<meaningful name> for a raise event
    • Decision Names: dcsn_<meaningful name>
    • Control Names: cntrl_<meaningful_name>
    • Email Names: email_<meaningful_name>
    • Assignment Names: asgmnt_<meaningful_name>
    • Timer Names: tmr_<meaningful_name>
Thanks
Ur's Hari