If the Mapping Type is… | Complete these fields… |
Delimited File or Data Dictionary Import | Mapping File. Select the input file. Supported file types are CSV, XML, XSD, and DDI. Delimiter. The delimiter character that separates columns. Text Qualifier. The character that encloses field values that contain the delimiter. For example, if the delimiter is a comma and the field values include commas, specify a text qualifier such as double quotes ("). Header Row Present at Line. The line number of the header row in the input file. The default value is 1. Number of columns (if no header present). The number of columns in the file if a header row does not exist. Encoding. The encoding of the input file. Applicable only to delimited files. Preselect Source Attributes. Select to map column names in the input file as source attributes in the object. |
Remote Database | Connection. Select the connection to the source database. Schema. Select the schema. Entity. Select the source object. |
XML/XSD File | Mapping File. Select the XML/XSD file containing the data. |
JSON/JSON Schema File | Mapping File. Select the JSON schema file containing the data. Preselect Source Attributes. Select to map column names in the input file as source attributes in the object. Optional. |
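The interaction between the Delimiter and Text Qualifier fields described above can be sketched with Python's standard csv module. This is illustrative only; the input data and column names are hypothetical, and OneData's own parser is configured entirely through the fields in the table.

```python
import csv
import io

# Hypothetical delimited input: comma delimiter, double-quote text qualifier.
# The second field contains the delimiter, so it is wrapped in the qualifier.
raw = 'EMP_ID,EMP_NAME,CITY\n1,"Doe, Jane",Berlin\n'

reader = csv.reader(io.StringIO(raw), delimiter=",", quotechar='"')
rows = list(reader)

header = rows[0]      # header row present at line 1
records = rows[1:]
print(header)         # ['EMP_ID', 'EMP_NAME', 'CITY']
print(records[0][1])  # Doe, Jane -- the qualifier preserved the embedded comma
```

Without the qualifier, the embedded comma would split "Doe, Jane" into two columns and shift every subsequent value.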
Option | Description |
Operator | Specify the operator. You cannot combine AND and OR in the filter criteria. |
Entity | Select the object on which to apply the filter. OneData objects cannot contain a semi-colon (;) in the object name because semi-colons are the filter delimiter. If needed, the semi-colon can be specified in the object display name. |
Attribute | Select the column on which to apply the filter. This field also includes DECODE columns. Select the DECODE column if you want to select from the description columns mapped to a reference column. |
Description Attribute | Select the description column. If a DECODE column is selected in the Attribute, this field is populated with the columns of the related object. Ensure that the Description Attribute selected in the filter is correctly mapped in the Mapping Attributes section. |
Filter Value | Select the operator condition and filter value. Equals and not equals are case-insensitive comparisons. Less than and greater than apply to numeric values (without decimals) and dates. You can add multiple values separated by commas. Click the arrows to add each condition to the filter. To add multiple conditions, modify the filter criteria and add each new condition to the filter; the filter definition itself cannot be edited directly. To remove the last filter criterion added, click Undo. |
The following restrictions apply to filters: The operand value must match the data type of the column. For example, you cannot enter a text value for a numeric column. Dates must be in the same format as defined in the Mapping Attributes. For remote database imports, use the YYYY-MM-DD date format. Timestamps must have the OneData-supported time portion. Enter multiple conditions to select a range of data. For example, to import only the rows with EMP_ID greater than 3 and less than 10, specify EMP_ID > 3 AND EMP_ID < 10. Applying a child filter in a hierarchy excludes or includes the entire row (including the parent). The following is an example of the filter syntax: <Object_Name>.<Column_Name><Operator><Filter_Value> or <Object_Name>.<ForeignKey_Name>#<Desc_Attribute><Operator><Filter_Value>
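As an illustrative sketch (not OneData code), a range filter in the documented syntax can be composed from individual criteria like this. The EMPLOYEE object and EMP_ID column are hypothetical.

```python
# Compose filter criteria in the documented
# <Object>.<Column> <Operator> <Value> form (hypothetical names).
def filter_criterion(entity, column, operator, value):
    return f"{entity}.{column} {operator} {value}"

# A range is selected with multiple AND conditions;
# AND and OR cannot be mixed in one filter.
criteria = " AND ".join([
    filter_criterion("EMPLOYEE", "EMP_ID", ">", 3),
    filter_criterion("EMPLOYEE", "EMP_ID", "<", 10),
])
print(criteria)  # EMPLOYEE.EMP_ID > 3 AND EMPLOYEE.EMP_ID < 10
```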
Option | Description |
Processing Mode | Select how OneData processes the data in the import file for the selected object. Insert only (data objects). Only perform inserts of new data. Update existing entries. Update existing and insert new data. Update only (data objects). Only perform updates to the existing data. Delete. Delete matching data. No other action is performed. Restore. Restore logically deleted data. No other action is performed. Purge. Purge logically deleted data. No other action is performed. |
External DQ Mode | Select Enable to perform the cleansing and matching processes while executing this interchange mapping. This loads the data from a source to a consolidation staging object. The mapping from Staging to Gold must be defined. Applicable only for consolidation objects. |
Data Compare Criteria | Applicable only to the Update only and Update existing entries processing modes. Specifies whether updates use case-sensitive or case-insensitive comparison. Case-sensitive comparisons reduce delays during import. |
Execution Batch Size | The size of the execution batch that is forwarded to the exception queue when partial commit is enabled. This overrides the size configured in the OneData property onedata.interchangeexecution.batchsize. By default, OneData processes records in batches of 100 and performs the commit at the end of the import process. For more information about configuring the onedata.properties file, see Administering webMethods OneData. Note: This field is not available for JSON/JSON Schema File imports. |
Enable Partial Commit | When selected, OneData commits data at the end of each batch, even if the import does not complete successfully. This means that the records that are inserted or updated at the connection level are committed to the database. Important: This must be selected to capture exceptions (if any) to the Exception queue during an import. Note: This field is not available for JSON/JSON Schema File imports. |
Enable Multiple Connection Usage | When selected, uses isolated connections within an execution batch and automatically enables partial commits at the batch level. Note: This field is not available for Data Dictionary and JSON/JSON Schema file imports. |
Exception Queue Automatic Retrigger | Applicable only if Enable Partial Commit is selected. Select to capture only the faulty records in a batch to the Exception queue. Clear this option to capture all records in a batch to the Exception queue. Note: This field is not available for JSON/JSON Schema File imports. |
Skip columns with zero-length value | Select to skip processing of columns with zero-length values. If selected for a conceptual object, all constituent objects must have at least one record at each level; a missing value at any level causes unexpected results. For example, in a Country-State-City conceptual object, select this option only if all the objects have at least one record in the source object, as in the following example: <!--Record 1--> <Country>Country_Name_1 <State>State_Name_1 <City>City_Name_1</City> </State> </Country> |
Accept Date into Timestamp Column | Select to convert a date column to the OneData timestamp format when the date column is mapped to a timestamp column. Note: A validation error occurs if the date in the source contains only partial time portions. |
Ignore missing parent reference | Select to avoid exceptions when a parent reference is not provided in the input. |
Bypass temporal validation | Select to ignore temporal validation exceptions in temporal objects. |
Bypass regular expression and Rules Engine | Select to ignore regular expression and rules engine validations during an import. |
Bypass Workflow | Select to ignore workflow rules during an import. |
Bypass Data Audit | Select to skip adding entries into data audit trails for the data being imported. |
Start import at row | The starting row number at which to begin the import. Not applicable for remote database or XML file imports. |
Service Audit Log Level | Select the logging level for inbound data import service requests through REST, Interchange Mapping, and JMS: No Service Logging. No logs are created. Basic. Provides a summary of the data import process, including the status of the process, the duration of the execution, the data source, and so on. Detailed. The default log level. Includes the basic logging information plus record-level details of the data import. Advanced. Includes the detailed logging information plus additional information about the payload and the interchange mapping, that is, the mapping of the payload to OneData entities. |
Source System | Enter a description of the source system of the payload. |
Enforce where clause on remote entity (Optional) | Build a WHERE condition to restrict which data is imported. The Remote Database import job applies this condition to the source data when fetching values; the filter, if specified, is then applied to the resulting values. Applicable only to remote database imports. |
Number of Threads | Controls concurrent processing for objects that have an optional database sequence as the primary key. Enter 1 to disable concurrent processing. Leave the field blank to enable concurrent processing with the default pool size. Note: This field is not available for JSON/JSON Schema File imports. |
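A minimal sketch of how Execution Batch Size, Enable Partial Commit, and Exception Queue Automatic Retrigger interact, using hypothetical helper names rather than OneData internals: each batch is committed even if a later record fails, and only the faulty records in a batch are forwarded to the exception queue.

```python
# Illustrative sketch (hypothetical names, not OneData code): with partial
# commit enabled, each batch is committed at its end, and faulty records
# are captured to an exception queue instead of aborting the import.
def import_records(records, batch_size, process, commit, exception_queue):
    committed = 0
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]
        ok, faulty = [], []
        for record in batch:
            try:
                process(record)
                ok.append(record)
            except ValueError:
                faulty.append(record)
        commit(ok)                      # commit at the end of each batch
        committed += len(ok)
        exception_queue.extend(faulty)  # only faulty records, when retrigger is on
    return committed

def failing_process(record):
    # Stand-in for real record processing; rejects negative values.
    if record < 0:
        raise ValueError("faulty record")

committed_batches = []
exceptions = []
total = import_records([1, 2, -1, 4], batch_size=2,
                       process=failing_process,
                       commit=committed_batches.append,
                       exception_queue=exceptions)
print(total)       # 3
print(exceptions)  # [-1]
```

Without partial commit, a single faulty record would instead roll back everything and commit only at the very end of the import.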
Detail | Description |
Profile Name | The name of the interchange mapping profile. |
Profile Description | The description of the profile. |
Additional Row Count | Adds additional rows to the attribute mapping during the import. For example, to populate a column in the object that does not exist in the import file, add a row here and specify a default value for it while mapping the attributes (explained in the next step). |
Set | Click Set to define additional rows. |
Detail | Description |
Source Attribute | The name or number of the column in the input file. |
Destination Entity | The OneData object into which to map the column. For example, for a conceptual object, multiple objects can be viewed and attributes mapped to the required object. |
Destination Attribute | Select the attribute into which to map the Source Attribute. |
Description Attribute | If the Destination Attribute is a DECODE of any attribute in the entity, this field is populated with all the attributes in that entity for the decoded value. When using DECODE, if the column value maps to more than one record in the object, the import results in an error. |
Default Value | Specify a default value. This value is used only when both the source and destination attributes are null. |
Constant Value | Specify a constant value. When the source attribute is null, the destination attribute is updated with the constant value, regardless of whether the destination attribute was null. |
Format | Specify the format for the attribute value. |
Python Transformation | If required, enter the Python script that derives the required value from the Source Attribute. |
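A minimal sketch, with hypothetical helper names, of how Default Value, Constant Value, and a Python Transformation combine for a single attribute. OneData's actual script contract (variable names, return convention) is product-specific; this only illustrates the documented precedence.

```python
# Illustrative per-attribute mapping sketch (hypothetical, not OneData code):
# a Python transformation derives the value from the source attribute, a
# Constant Value replaces any null source (even if the destination already
# had a value), and a Default Value applies only when both sides are null.
def map_attribute(source, destination, transform=None,
                  constant=None, default=None):
    if source is not None and transform is not None:
        source = transform(source)
    if source is None and constant is not None:
        return constant           # constant overrides the destination
    if source is None and destination is None:
        return default            # default: both sides null
    return source if source is not None else destination

# A transformation that normalizes a country code:
def normalize(value):
    return value.strip().upper()

print(map_attribute("  de ", None, transform=normalize))  # DE
print(map_attribute(None, "kept", constant="forced"))     # forced
print(map_attribute(None, None, default="fallback"))      # fallback
```

Note the asymmetry the table describes: a constant replaces the destination whenever the source is null, while a default never overwrites an existing destination value.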