Managing Master Data with webMethods OneData 10.5
Managing Interchange Mapping Profiles
An interchange mapping is a reusable mapping to import data in OneData from an external source. You can create an interchange mapping to import data from a delimited file, remote database, XML, or JSON file. To import from a remote database, ensure that you have already configured the database connection. For information about configuring connections, see Administering webMethods OneData.
You can also create interchange mappings to perform data quality tasks, such as data quality cleansing and data quality matching. Once you create the mapping, you can use the mapping in an import job. Saved jobs can be scheduled through the Job Center. For information about data quality with OneData, see webMethods OneData Consolidation MDM Guide.
Once you create an interchange mapping profile, you can use it in any scheduled import job or to manually import data. For information about using an interchange mapping in a scheduled job, see Importing Data into Objects. For information about using an interchange mapping for manual data import, see Using an Interchange Mapping to Manually Import Data.
* To manage an interchange mapping profile
1. On the Menu toolbar, click Data Interchange > Configuration > Interchange Mapping.
2. Click Add New Mapping to create a new mapping.
Or, to work with an existing profile, you can edit or clone it. To do so, click the Edit icon or the Clone icon of the profile you want to use, and then follow the field descriptions in this procedure to edit the profile.
To delete the profile, click the Delete icon of the profile and then click OK. You can only delete profiles that are not associated with an import job.
3. In Mapping Type, select the input source:
*Delimited File. Import data from a delimited file.
*Remote Database. Import data from a table in a remote database.
*XML/XSD File. Import data from an XML or XSD file.
*JSON/JSON Schema File. Import data from a JSON data or schema file.
*Data Dictionary Import. Import data from a data dictionary.
Note: The Data Dictionary Import option is available only if OneData MDR is installed.
4. In Source Information, complete the fields corresponding to the mapping type.
If the Mapping Type is…
Complete these fields…
Delimited File or Data Dictionary Import
*Mapping File. Select the input file. Supported file types are CSV, XML, XSD, DDI.
*Delimiter. The delimiter character that separates columns.
*Text Qualifier. The character that separates columns when the field values contain the delimiter. For example, if the delimiter is a comma and the field values include commas, specify a text qualifier such as a double quote (").
*Header Row Present at Line. The line number of the header row in the input file. Default value is 1.
*Number of columns (if no header present). The number of columns in the file if a header row does not exist.
*Encoding. The encoding of the input delimited file. Only applicable to delimited files.
*Preselect Source Attributes. Select to map column names in the input file as source attributes in the object.
Remote Database
*Connection. Select the connection to the source database.
*Schema. Select the schema.
*Entity. Select the source object.
XML/XSD File
*Mapping File. Select the XML/XSD file containing the data.
JSON/JSON Schema File
*Mapping File. Select the JSON schema file containing the data.
*Preselect Source Attributes. Select to map column names in the input file as source attributes in the object. Optional.
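For delimited files, the Delimiter and Text Qualifier fields follow standard CSV conventions. The following is a minimal Python sketch (illustrative only, not OneData code) of how a text qualifier lets field values safely contain the delimiter character:

```python
import csv
import io

# A comma-delimited file whose second field contains a comma,
# protected by a double-quote text qualifier.
data = 'EMP_ID,EMP_NAME,CITY\n1,"Smith, John",Boston\n'

# delimiter and quotechar correspond to the Delimiter and
# Text Qualifier fields of the mapping profile.
rows = list(csv.reader(io.StringIO(data), delimiter=",", quotechar='"'))

header, first = rows[0], rows[1]
print(first)  # ['1', 'Smith, John', 'Boston']
```

Without the qualifier, the embedded comma in "Smith, John" would split that value into two columns.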
5. In Object Mapping, select the OneData Object Type and the object in which to import the data.
If the object type is a conceptual object, click Select Constituent Objects and select the related objects. By default, only the root object is selected.
6. In Filter, specify a filter to limit the data to import. If you specify a filter, only data that meets the filter criteria is imported into the destination object. Click the + symbol to expand the filter and set the filter criteria.
Note: For conceptual objects, rows from the parent object are included only when they satisfy the child filter. Parent objects without any child objects are also excluded.
Operator
Specify the operator. You cannot combine AND and OR in the filter criteria.
Entity
Select the object on which to apply the filter. OneData object names cannot contain a semi-colon (;) because semi-colons are the filter delimiter. If needed, a semi-colon can be used in the object display name.
Attribute
Select the column on which to apply the filter. This field also includes DECODE columns. Select the DECODE column if you want to select from the description columns mapped to a reference column.
Description Attribute
Select the description column. If a DECODE column is selected in the Attribute, this field is populated with the columns of the related object. Ensure that the Description Attribute selected in the filter is correctly mapped in the Mapping Attributes section.
Filter Value
Select the operator condition and filter value. Equals and not equals comparisons are case insensitive. Less than and greater than apply to numeric values (without decimals) and dates.
You can add multiple values separated by commas. Click the arrows to add each condition to the filter. The filter definition cannot be edited directly; to change it, add each new condition to the filter. To remove the last filter criterion added, click Undo.
The following restrictions apply to filters:
*The operand value must match the data type of the column. For example, you cannot enter a text value for a numeric column.
*Dates must be in the same format as defined in the Mapping Attributes. For remote database imports, use YYYY-MM-DD date format.
*Timestamps must have the OneData supported time portion.
*Enter multiple conditions to select a range of data. For example, to import only the rows with EMP_ID greater than 3 and less than 10, specify EMP_ID > 3 AND EMP_ID < 10.
*Applying a child filter in a hierarchy excludes or includes the entire row (including parent).
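The restrictions above can be pictured with a small Python sketch (illustrative only, not OneData internals): every condition is applied conjunctively, equals comparisons are case insensitive, and a range is expressed as two AND-ed conditions:

```python
# Hypothetical evaluation of a filter such as EMP_ID > 3 AND EMP_ID < 10.
rows = [{"EMP_ID": 2}, {"EMP_ID": 5}, {"EMP_ID": 9}, {"EMP_ID": 12}]

# Each condition is (column, operator, operand); operands must match
# the column's data type. AND and OR cannot be mixed, so the filter is
# simply every condition applied in sequence.
conditions = [("EMP_ID", ">", 3), ("EMP_ID", "<", 10)]

OPS = {
    ">": lambda a, b: a > b,
    "<": lambda a, b: a < b,
    "=": lambda a, b: str(a).lower() == str(b).lower(),  # case insensitive
}

def matches(row, conditions):
    return all(OPS[op](row[col], operand) for col, op, operand in conditions)

imported = [r for r in rows if matches(r, conditions)]
print(imported)  # [{'EMP_ID': 5}, {'EMP_ID': 9}]
```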
Tip: To clear filter conditions, click Undo until all filter criteria are cleared. If you change the object in Object Mapping, click Refresh Filter to repopulate the entity list.
7. In Processing Options, define processing options for the import profile.
Processing Mode
Select how OneData processes the data in the import file for the selected object.
*Insert only (data objects). Only perform inserts of new data.
*Update existing entries. Update existing and insert new data.
*Update only (data objects). Only perform updates to the existing data.
*Delete. Delete matching data. No other action is performed.
*Restore. Restore logically deleted data. No other action is performed.
*Purge. Purge logically deleted data. No other action is performed.
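The processing modes can be thought of as a dispatch on whether a matching destination row already exists. The following Python sketch shows the assumed semantics (illustrative only, not OneData code):

```python
# Simplified dispatch for the processing modes described above.
def process(mode, record, existing):
    """existing: the matching destination row, or None if no match."""
    if mode == "insert_only":
        return "insert" if existing is None else "skip"
    if mode == "update_existing":                # update and insert (upsert)
        return "insert" if existing is None else "update"
    if mode == "update_only":
        return "update" if existing is not None else "skip"
    if mode in ("delete", "restore", "purge"):
        # These modes act only on matching data; no other action is taken.
        return mode if existing is not None else "skip"
    raise ValueError(f"unknown mode: {mode}")

print(process("update_existing", {"id": 1}, None))       # insert
print(process("update_only", {"id": 1}, None))           # skip
print(process("delete", {"id": 1}, {"id": 1}))           # delete
```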
External DQ Mode
Select Enable to perform the cleansing and matching processes while executing this interchange mapping. This loads the data from a source to a consolidation staging object. The mapping from Staging to Gold must be defined. Applicable only for consolidation objects.
Data Compare Criteria
Applicable only to the update only and update existing entries processing modes. Select whether updates compare values case sensitively or case insensitively. Case-sensitive comparison reduces delays during import.
Execution Batch Size
Size of the execution batch that is forwarded to the exception queue when a partial commit is enabled.
This overrides the size configured in the OneData property, onedata.interchangeexecution.batchsize. By default, OneData processes the records in batches of 100 and performs the commit at the end of the import process.
For more information about configuring this property, see Administering webMethods OneData.
Note: This field is not available for JSON/JSON Schema File imports.
Enable Partial Commit
When selected, OneData commits data at the end of each batch, even if the import does not complete successfully. This means that the records that are inserted or updated at the connection level are committed to the database.
Important: This must be selected to capture exceptions (if any) to the Exception queue during an import.
Note: This field is not available for JSON/JSON Schema File imports.
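The interplay between Execution Batch Size and Enable Partial Commit can be sketched as follows. This is an illustrative Python model under assumed semantics, not OneData code: with partial commit enabled, each batch is committed even if later records fail, and faulty records are captured to the exception queue instead of rolling back the whole import.

```python
# Illustrative model of batched import with optional partial commit.
def run_import(records, batch_size=100, partial_commit=True):
    committed, exceptions = [], []
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]
        ok = [r for r in batch if r.get("valid", True)]
        bad = [r for r in batch if not r.get("valid", True)]
        if partial_commit:
            committed.extend(ok)      # commit at the end of each batch
            exceptions.extend(bad)    # faulty records go to the exception queue
        else:
            if bad:                   # single commit: any fault aborts the import
                return [], bad
            committed.extend(ok)
    return committed, exceptions

recs = [{"id": i, "valid": i != 3} for i in range(5)]
done, faulty = run_import(recs, batch_size=2)
print(len(done), len(faulty))  # 4 1
```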
Enable Multiple Connection Usage
When selected, uses isolated connections within an execution batch and automatically enables partial commits at the batch level.
Note: This field is not available for Data Dictionary and JSON/JSON Schema file imports.
Exception Queue Automatic Retrigger
Applicable only if Enable Partial Commit is selected. Select to capture only the faulty records in a batch to the Exception queue.
Clear this option to capture all records in a batch to the Exception queue.
Note: This field is not available for JSON/JSON Schema File imports.
Skip columns with zero-length value
Select to skip processing of columns with zero-length values. If selected for a conceptual object, all constituent objects must have at least one record in each level. A missing value in any level causes unexpected results.
For example, in a Country-State-City conceptual object, select this option only if every level (Country, State, and City) has at least one record in the source object.
Accept Date into Timestamp Column
Select to convert a date column to the OneData timestamp format when the date column is mapped to a timestamp column.
Note: A validation error occurs if the date in the source contains only partial time portions.
Ignore missing parent reference
Select to avoid exceptions when a parent reference is not provided in the input.
Bypass temporal validation
Select to ignore temporal validation exceptions in temporal objects.
Bypass regular expression and Rules Engine
Select to ignore regular expression and rules engine validations during an import.
Bypass Workflow
Select to ignore workflow rules during an import.
Bypass Data Audit
Select to skip adding entries into data audit trails for the data being imported.
Start import at row
The starting row number at which to begin the import. Not applicable for remote database or XML file imports.
Service Audit Log Level
Select the logging level for inbound data import service requests through REST, Interchange Mapping, and JMS:
*No Service Logging. No logs are created.
*Basic. Provides a summary of the data import process, including the status of the process, the duration of the process execution, the data source, and so on.
*Detailed. The default log level. It includes basic logging information and details of data import at the record level.
*Advanced. Includes detailed logging information plus additional information about the payload and the interchange mapping, that is, the mapping of the payload to OneData entities.
Source System
Enter a description of the source system of the payload.
Enforce where clause on remote entity (Optional)
Build a where condition to restrict which data is imported. The Remote Database import job applies this condition to the source data to fetch the values. Then the filter, if specified, is applied to the resulting values. Applicable only for remote database imports.
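The two-stage restriction described above, where the where clause runs at the source and the mapping filter runs afterward on the fetched rows, can be sketched as follows (illustrative only, using an in-memory SQLite table as a stand-in for the remote database):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (emp_id INTEGER, dept TEXT)")
con.executemany("INSERT INTO emp VALUES (?, ?)",
                [(1, "HR"), (5, "IT"), (9, "IT"), (12, "IT")])

# Stage 1: the where clause restricts what is fetched from the source.
fetched = con.execute(
    "SELECT emp_id, dept FROM emp WHERE dept = ?", ("IT",)).fetchall()

# Stage 2: the interchange-mapping filter is applied to the fetched rows,
# e.g. EMP_ID > 3 AND EMP_ID < 10.
imported = [row for row in fetched if 3 < row[0] < 10]
print(imported)  # [(5, 'IT'), (9, 'IT')]
```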
Number of Threads
Controls concurrent processing for objects that have an optional database sequence as the primary key. Use one of these options:
*Enter 1 to disable concurrent processing.
*Leave the field blank to enable concurrent processing with the default pool size.
Note: This field is not available for JSON/JSON Schema File imports.
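The thread-count behavior can be sketched with Python's standard thread pool (illustrative only; the default pool size shown is an assumed placeholder): a value of 1 serializes the work, and a blank value falls back to the default pool size.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative model of the Number of Threads option.
def import_batches(batches, num_threads=None, default_pool=4):
    workers = num_threads or default_pool   # blank (None) -> default pool size
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(len, batches))  # len() stands in for per-batch work

print(import_batches([[1, 2], [3], [4, 5, 6]], num_threads=1))  # [2, 1, 3]
```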
8. Click Next.
9. You can select an alternate primary key if needed. Click Next.
The Mapping screen appears.
10. Enter interchange mapping details as follows:
Note: The source attributes are predefined. You can add an additional row to map a column even if it does not exist in the source.
Profile Name
The name of the interchange mapping profile.
Profile Description
The description of the profile.
Additional Row Count
Adds an additional column to the object during the import. For example, to populate a column in the object that does not exist in the import file, use this field and specify a default value. Add the row while mapping the attributes and setting the default value (explained in the next step).
Click Set to define additional rows.
11. Map the columns between the source and destination objects.
Source Attribute
The name or number of the column in the input file.
Destination Entity
The OneData object into which to map the column. For a conceptual object, multiple objects are available, and you can map each attribute to the required object.
Destination Attribute
Select the attribute into which to map the Source Attribute.
Description Attribute
If the Destination Attribute is a DECODE of any attribute in the entity, this field is populated with all the attributes in that entity for the decoded value.
When using DECODE, if the column maps to more than one record in the object, the import results in an error.
Default Value
Specify a default value. This value is only used when the source and destination attributes are null.
Constant Value
Specify a constant value. When the source attribute is null, the destination attribute is updated with the constant value, regardless of whether the destination attribute was null.
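The difference between Default Value and Constant Value can be sketched as a precedence rule (an illustrative Python model of the assumed semantics, not OneData code): a constant replaces the destination whenever the source is null, while a default applies only when source and destination are both null.

```python
# Illustrative precedence between Default Value and Constant Value.
def resolve(source, destination, default=None, constant=None):
    if constant is not None and source is None:
        return constant                       # constant overrides destination
    if source is not None:
        return source                         # source value wins when present
    if destination is None and default is not None:
        return default                        # default needs both to be null
    return destination

print(resolve(None, "old", constant="N/A"))   # N/A  (constant overrides)
print(resolve(None, "old", default="N/A"))    # old  (default does not apply)
print(resolve(None, None, default="N/A"))     # N/A
```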
Format
Specify the format for the attribute value.
Python Transformation
If required, enter the Python script that derives the required detail from the Source Attribute.
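As a standalone sketch of the kind of derivation such a transformation performs (the exact variables and entry point that OneData exposes to the script are product-specific and not shown here), a script might split a combined source attribute into the detail needed by the destination attribute:

```python
# Hypothetical transformation: derive the surname from a
# "Surname, Forename" source column and normalize its case.
def transform(source_value):
    return source_value.split(",")[0].strip().upper()

print(transform("Smith, John"))  # SMITH
```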
12. After you provide the mappings, click Save.
Note: If the destination object contains a sequence column, OneData notifies you that the value imported for this attribute will be overwritten by the sequence value generated after import.
13. Once the mapping is complete, you can perform an immediate import or create a scheduled import job and assign this mapping to the import job step.

Copyright © 2011-2019 | Software AG, Darmstadt, Germany and/or Software AG USA, Inc., Reston, VA, USA, and/or its subsidiaries and/or its affiliates and/or their licensors.