Informatica IICS Interview Questions (Jan 2024)
Informatica IDMC Group: https://nas.io/infaidmc
Informatica Community Group: https://nas.io/infacareers
LEVEL-1 (MAPPINGS)

1. What are CDI mappings and how can they be created in IICS?
Answer: CDI (Cloud Data Integration) mappings in IICS are configurations that define how data moves from sources to targets. To create a mapping, select the source and target connections, design the mapping layout by dragging and linking objects, set the field mappings, validate the mapping, and then run the associated task, monitoring it after a successful validation.

2. What are active and passive transformations in IICS?
Answer: Active transformations can change the number of rows that pass through them; examples include the Filter, Sorter, and Router transformations, which alter the data flow. Passive transformations leave the row count unchanged; examples include the Expression and Sequence Generator transformations.
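To make the active-versus-passive distinction concrete, here is a minimal plain-Python sketch of the row-level logic that a Filter (active) and an Expression (passive) step apply. The record layout, the 100-unit threshold, and the 18% tax rate are invented for illustration; in IICS these steps are configured in the mapping designer, not written as code.

# Illustrative analogues of an active (Filter) and a passive (Expression) transformation.
rows = [
    {"order_id": 1, "amount": 120.0},
    {"order_id": 2, "amount": 45.0},
    {"order_id": 3, "amount": 300.0},
]

def filter_transform(records, min_amount):
    """Active: the number of output rows can differ from the input."""
    return [r for r in records if r["amount"] >= min_amount]

def expression_transform(records):
    """Passive: every input row comes out; only derived fields are added."""
    return [{**r, "amount_with_tax": round(r["amount"] * 1.18, 2)} for r in records]

filtered = filter_transform(rows, min_amount=100)   # 2 rows survive out of 3
enriched = expression_transform(rows)               # still 3 rows
print(len(rows), len(filtered), len(enriched))      # 3 2 3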
3. How can the following transformations be used in IICS? i. Expression ii. Sorter iii. Joiner iv. Lookup v. Union vi. Router vii. Filter viii. Rank ix. Aggregator x. Normalizer xi. Transaction Control
Answer:
Expression Transformation:
• Drag the Expression transformation onto the mapping canvas.
• Configure it by adding ports and defining expressions, conditions, and transformations on the incoming data.
Sorter Transformation:
• Place the Sorter transformation in the mapping.
• Specify the columns to sort on and set the sort order (ascending/descending) in its properties.
Joiner Transformation:
• Add the Joiner transformation to the mapping.
• Define the join conditions by linking ports from different source pipelines and specify the join type (e.g., Inner, Outer) in the properties.
Lookup Transformation:
• Drag and drop the Lookup transformation onto the mapping.
• Configure the connection details, define the lookup conditions, and map the output ports to target fields.
Union Transformation:
• Insert the Union transformation into the mapping.
• Link the data pipelines and specify the operation type (e.g., Union, Union All) in its properties.
Router Transformation:
• Place the Router transformation on the mapping canvas.
• Define conditions to direct data rows to different output groups based on the criteria.
Filter Transformation:
• Add the Filter transformation to the mapping.
• Specify filter conditions so that only the required rows pass through.
Rank Transformation:
• Drag and drop the Rank transformation into the mapping.
• Configure rank properties such as the rank port, rank index, and group-by columns to assign ranks.
Aggregator Transformation:
• Insert the Aggregator transformation into the mapping.
• Define aggregate functions (sum, count, average, etc.) for the specified columns and set the group-by criteria.
Normalizer Transformation:
• Place the Normalizer transformation on the canvas.
• Configure its settings to pivot multiple rows into a single row or vice versa.
Transaction Control Transformation:
• Drag the Transaction Control transformation into the mapping.
• Set transaction properties (commit, rollback) based on conditional criteria or workflow requirements.

4. What are the major differences between the following assets in IICS?
a. Join and Lookup
b. Connected and Unconnected Lookup
c. Union vs File list
d. IIF vs Decode function
e. SQL override and Lookup override
f. Data cache and Index cache
g. Hierarchical parser and structural parser
h. Input and IN/OUT parameters
i. SCD Type 1, 2, 3
j. Fatal and Non-fatal errors
k. Upsert and Data-driven
Answer:
Join and Lookup:
Join: Combines data from different sources based on specified conditions.
Lookup: Retrieves data from a relational table based on lookup conditions and incorporates it into the data flow.
Connected and Unconnected Lookup:
Connected Lookup: Receives input directly from the mapping pipeline and returns output to the mapping.
Unconnected Lookup: Called from within an expression and returns a single column value that can be used in that expression.
Union vs File list:
Union: Merges data from different sources vertically (stacks rows) within a single data pipeline.
File List: Provides a list of files from a specified directory or FTP location, allowing multiple files to be processed in one session.
IIF vs Decode function:
IIF: A conditional function similar to an "if-else" statement in programming languages.
Decode: Evaluates and transforms data against multiple conditions, similar to a "switch-case" statement in programming.
SQL override and Lookup override:
SQL Override: Allows the SQL query of the source (or lookup) to be modified to customize data retrieval or manipulation.
Lookup Override: Specifies the lookup condition or the SQL query used to retrieve data within a Lookup transformation.
Data cache and Index cache:
Data Cache: Stores the data rows retrieved from a lookup table or target, improving performance by reducing the number of database hits.
Index Cache: Stores index information for faster lookup operations, optimizing data retrieval based on the condition columns.
Hierarchical parser and structural parser:
Hierarchical Parser: Parses hierarchical data formats such as XML or JSON and breaks them into structured data.
Structural Parser: Handles flat or structured data formats and separates them into individual fields or columns.
Input and IN/OUT parameters:
Input Parameters: Pass values into a mapping or task but cannot be modified during execution.
IN/OUT Parameters: Pass values into a mapping or task and can be modified during execution.
SCD Type 1, 2, 3 (illustrated in the sketch after this question):
SCD Type 1: Overwrites old data with new data, preserving only the most recent information.
SCD Type 2: Maintains history by creating a new record for each change and retaining the old record with effective date ranges.
SCD Type 3: Keeps both old and new values by adding extra attributes to the same row, allowing limited historical tracking.
Fatal and Non-fatal errors:
Fatal Errors: Stop the session or workflow immediately because a critical error affects data integrity or processing.
Non-fatal Errors: Allow the session or workflow to continue processing; they can be handled or ignored based on the configuration.
Upsert and Data-driven:
Upsert: Updates a record if it already exists in the target, or inserts it as a new record if it does not.
Data-driven: Decides the operation for each row (insert, update, delete, or reject) dynamically, based on conditions evaluated on the data.
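To make the SCD comparison in question 4 concrete, the short Python sketch below shows how each type treats the same change (a customer moving cities). The customer/city fields and the dates are made up for illustration; in IICS this logic is normally built with mapping steps such as Lookup and Router plus insert/update handling on the target, not hand-written code.

from datetime import date

# Existing dimension row and an incoming change (the customer moved cities).
current = {"cust_id": 101, "city": "Pune"}
incoming = {"cust_id": 101, "city": "Mumbai"}

# Type 1: overwrite in place; history is lost.
scd1 = {**current, "city": incoming["city"]}

# Type 2: expire the old row and add a new versioned row; full history is kept.
scd2 = [
    {**current, "effective_from": date(2023, 1, 1),
     "effective_to": date(2024, 1, 15), "is_current": False},
    {**incoming, "effective_from": date(2024, 1, 15),
     "effective_to": None, "is_current": True},
]

# Type 3: keep limited history in an extra column on the same row.
scd3 = {**current, "city": incoming["city"], "previous_city": current["city"]}

print(scd1)
print(scd2)
print(scd3)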
5. What are the different methods to remove duplicates in CDI?
Answer: In Cloud Data Integration (CDI), there are several methods to remove duplicates from data (the first sketch below illustrates the underlying keep-first-occurrence logic):
Aggregator Transformation: Use the Aggregator transformation with the "Group By" option on specific columns to group the data and eliminate duplicates within each group.
Sorter Transformation: Sort the data on the columns containing duplicates, then use a subsequent Expression or Filter transformation to remove consecutive duplicate rows.
Expression Transformation: Build an expression that checks for duplicate values in specific columns and flags the rows so they can be filtered out.
Database-specific techniques: Use SQL constructs such as DISTINCT, GROUP BY, or ROW_NUMBER() OVER() in a SQL override to remove duplicates directly in the source query.
Using the "Distinct" option: In mappings, enable the "Distinct" option in the source properties to eliminate duplicate rows while reading data from the source.
Java Transformation: Develop a custom Java transformation that scans for and removes duplicates based on specific criteria or logic within the data flow.
Lookup Transformation: Configure a Lookup transformation to check for duplicates by caching unique values and using a flag to filter out repeats.
Each method has its advantages and limitations depending on the data characteristics, data volume, and the complexity of identifying duplicates; choose the one that best fits the requirements and the efficiency needed for the integration process.

6. What is an indirect file load and how can it be implemented in IICS?
Answer: An indirect file load is a method used in Informatica Intelligent Cloud Services (IICS) to load files into mappings dynamically, without hardcoding file names. A parameter or variable specifies the file name or location, which gives the flexibility to load different files at run time. To implement an indirect file load in IICS:
Parameterize the file name: Create a parameter or variable within IICS to store the file name or path. Parameters can be defined at the mapping level or as global variables.
Configure the mapping: Reference the parameter/variable in the source or target file properties instead of specifying a fixed file name.
Assign the parameter at run time: When the mapping or task is executed, supply the file name or path as a parameter value, either manually or by passing it dynamically through a parameter file or a parameter assignment task.
Use an Expression transformation (optional): If needed, use an Expression transformation to construct or manipulate the file path from the parameter value based on specific criteria or conditions.
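As a quick illustration of question 5, this sketch shows the "keep the first row per key" logic that the Aggregator (group by) or Sorter-plus-Expression approaches implement inside a mapping. The column names are hypothetical; the equivalent SQL-override pattern is noted in a comment.

rows = [
    {"emp_id": 1, "email": "a@example.com"},
    {"emp_id": 2, "email": "b@example.com"},
    {"emp_id": 1, "email": "a@example.com"},   # duplicate on emp_id
]

def dedupe(records, key):
    """Keep only the first occurrence of each key value (like group-by/first)."""
    seen = set()
    unique = []
    for r in records:
        if r[key] not in seen:
            seen.add(r[key])
            unique.append(r)
    return unique

# In a SQL override the same idea is typically expressed with
# ROW_NUMBER() OVER (PARTITION BY emp_id ORDER BY ...) = 1 or SELECT DISTINCT.
print(dedupe(rows, key="emp_id"))   # two unique rows remain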

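For the indirect file load in question 6, here is a minimal Python sketch of the idea: the files to process come from a parameter and a file list rather than being hardcoded. The SRC_DIR parameter name, the filelist.txt name, and the CSV format are assumptions for illustration only; an actual IICS implementation relies on mapping parameters and source properties rather than code.

import csv
import os
from pathlib import Path

# The source directory comes from a parameter rather than being hardcoded.
src_dir = Path(os.environ.get("SRC_DIR", "/data/incoming"))

# A file list (one path per line) drives which files get processed.
file_list = src_dir / "filelist.txt"

def load_files(list_path):
    """Read every CSV named in the file list and yield its rows."""
    with open(list_path) as fl:
        for line in fl:
            csv_path = line.strip()
            if not csv_path:
                continue
            with open(csv_path, newline="") as f:
                yield from csv.DictReader(f)

# for row in load_files(file_list):
#     ...  # apply transformations and load to the target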
