webMethods Integration Cloud and Cloud Deployment Documentation 5.6.0
 
Orchestrated Integrations
Orchestrated Integration is the process of integrating two or more applications to automate a process or synchronize data in real time. Orchestrated Integration enables you to integrate applications and provides a way to manage and monitor your integrations.
Integration Cloud supports advanced integration scenarios involving multiple application endpoints, complex routing, and Integrations with multiple steps. Using a graphical drag-and-drop tool, you can create complex, orchestrated integrations and run them in the Integration Cloud environment.
* To create an orchestrated integration
1. From the Integration Cloud navigation bar, click Projects. The Projects screen appears.
2. Select a project in which you want to create the Integration. You can also create a new project. See Projects for more information.
3. To create a new Integration, from the Integrations screen, click Add New Integration.
4. Select Orchestrate two or more applications, and then click OK.
The user interface consists of a tool bar and a workspace. The tool bar holds all the available categories of blocks. You can browse through the menu of blocks and set up your own Integration by plugging blocks together in the workspace. The menu comes with a large number of predefined blocks, from Applications, Services, and Integrations to conditions and looping structures. You can drag relevant blocks from the tool bar and drop them at the anchor point.
The tool bar has a large number of blocks for common instructions and the blocks are divided into the following categories:
*Applications
*Services
*Integrations
*Control Flow
*Expressions
Block category
Icons
Description
Applications
Displays the Applications available in Integration Cloud.
Services
Use the Service blocks (date, math, string, and so on) to specify the service that will be invoked at run time. Related services are grouped in blocks. You can sequence services and manage the flow of data among them.
Note: For information on the different services, see Built-In Services.
The Reference Data block appears only if a Reference Data service is available at the Projects > <Select a Project> > Reference Data page. See Reference Data for more information.
Integrations
Displays the list of Integrations created in Integration Cloud. You can invoke an Integration from another Integration. When copying integrations from one stage to another, all the referred Integrations and their dependents will also be copied.
Click the icon if you want to view or modify an Integration after it is dropped at the anchor point. The Integration will open up for editing in a new tab.
Click the icon and select Map Input and Output if you want to map the input of the operation from the Pipeline and also map the output of the operation into the pipeline.
Click Duplicate to repeat a block, click Collapse to flatten a block, click Delete to remove a block from the workspace, or click Disable to disable a block and all blocks within that block. If you disable blocks, those blocks will not be considered for execution, test, or debug operations.
Control Flow
Conditional expressions, looping structures, and transform pipeline.
Conditional expressions perform different computations or actions depending on whether a specified boolean condition evaluates to true or false. The if block evaluates a boolean condition; if the condition is true, the statements inside the if block are executed. The if statement can be followed by an optional else statement, which executes when the boolean expression is false.
The if statements are executed from top to bottom. You can nest one if or else if statement inside another if or else if statement. You cannot have multiple else statements.
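The top-to-bottom evaluation of if and else if branches can be sketched in Python (an illustrative analogy, not Integration Cloud syntax; the function and thresholds are hypothetical):

```python
def classify_order(total):
    # Branches are checked from top to bottom; the first true
    # condition wins and the remaining branches are skipped.
    if total >= 1000:
        return "large"
    elif total >= 100:
        return "medium"
    else:
        # A single optional else handles all remaining cases.
        return "small"
```

Because evaluation stops at the first true condition, the order of the else if branches matters.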
Switch allows a variable to be tested for equality against a list of values. Each value is called a case, and the variable being switched on is checked for each case, that is, Switch evaluates a variable and skips to the value that matches the case. For example, if the Switch variable evaluates as "A", then case "A" is executed. A switch statement can have an optional default case, which must appear at the end of the switch. The default case can be used for performing a task when none of the cases are true. You cannot insert multiple default statements.
Note: You can include case steps that match null or empty switch values. A switch value is considered to be null if the variable does not exist in the pipeline or is explicitly set to null. A switch value is considered to be an empty string if the variable exists in the pipeline but its value is a zero length string.
Note: Switch executes the first case that matches the value, and exits the block.
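The Switch semantics described above, including the null, empty-string, and default cases, can be sketched in Python (an illustrative analogy; the variable names and routing strings are hypothetical):

```python
def route_by_region(pipeline):
    # Emulates the Switch block: the variable is compared against each
    # case in order; the first matching case runs and the block exits.
    region = pipeline.get("region")  # a missing variable behaves like a null value
    if region is None:               # case matching a null switch value
        return "no region supplied"
    if region == "":                 # case matching an empty-string switch value
        return "blank region"
    if region == "EMEA":
        return "route to Frankfurt"
    if region == "APAC":
        return "route to Singapore"
    # default case: runs when none of the cases match
    return "route to fallback"
```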
The try catch block is used to handle errors and exceptions. If you have a statement in the try block that has thrown an error, the error will be caught in the catch statement.
Note: If an error is thrown inside the catch section of the try catch block, the error will be ignored and the next statements in the Integration will be executed.
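The try catch behavior, including the note that an error thrown inside the catch section is ignored, can be sketched in Python (an analogy only; `run_with_recovery` and its arguments are hypothetical):

```python
def run_with_recovery(step, recover):
    # try section: run the step; any error is caught below.
    try:
        return step()
    except Exception as err:
        # catch section: attempt recovery. Mirroring the note above,
        # an error raised here is swallowed so later steps still run.
        try:
            return recover(err)
        except Exception:
            return None  # error inside the catch section is ignored
```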
Loops execute a set of steps multiple times based on the block you have chosen. It repeats a sequence of child steps once for each element in an array that you specify. For example, if your pipeline contains an array of purchase-order line items, you could use a Loop to process each line item in the array. Loop requires you to specify an input array that contains the individual elements that will be used as input to one or more steps in the Loop. At run time, the Loop executes one pass of the loop for each member in the specified array. For example, if you want to execute a Loop for each line item stored in a purchase order, you would use the document list in which the order’s line items are stored as the Loop’s input array.
The while loop repeats a set of steps as long as a condition holds. If the number of iterations is not fixed, it is recommended to use the while loop.
The do-until loop is similar, except that it repeats its body until a condition becomes true.
The for-each block traverses the items in a collection. Unlike other for loop constructs, for-each loops usually maintain no explicit counter: they essentially say "do this to everything in this set" rather than "do this x times".
The Exit Integration signaling success block allows you to successfully terminate and exit from the currently running Integration. You cannot attach child blocks to the Exit Integration signaling success block.
The Exit Integration signaling failure "…" block abnormally terminates the currently running integration with an error message. You can specify the text of the error message that is to be displayed. If you want to use the value of a pipeline variable for this error message, type the variable name between % symbols, for example, %mymessage%. The variable you specify must be a String. You cannot attach child blocks to the Exit Integration signaling failure "…" block.
The Throw error "..." block can be attached inside any block except the catch section of the try catch block, and allows you to explicitly throw an exception with a custom error message. If it is used inside the try section of the try catch block, the error will be caught in the catch section. If you want to use the value of a pipeline variable for this custom error message, type the variable name between % symbols, for example, %mymessage%. The variable you specify must be a String. You cannot attach child blocks to the Throw error "..." block.
Note: If you add a Throw error "..." block inside a try catch block, any changes done to the pipeline variables inside the try block will be reset to the previous values existing in the pipeline.
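The %variable% substitution used by the Exit Integration signaling failure and Throw error blocks might behave as in this Python sketch (the product performs this internally; `expand_message` is a hypothetical helper):

```python
import re

def expand_message(template, pipeline):
    # Replaces each %name% token with the String value of the matching
    # pipeline variable, e.g. the message "failed: %mymessage%".
    def lookup(match):
        name = match.group(1)
        value = pipeline.get(name, "")
        # The variable you specify must be a String.
        return value if isinstance(value, str) else ""
    return re.sub(r"%(\w+)%", lookup, template)
```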
The Break out of loop block should be used only within a loop and allows you to break out of the containing loop, that is, it allows you to break the program execution out of the loop it is placed in. You cannot attach child blocks to the Break out of loop block.
A Loop takes as input an array field that is in the pipeline. It loops over the members of the input array, executing its child steps each time through the loop. For example, if you have an Integration that takes a string as input and a string list is in the pipeline, use a Loop to invoke the Integration once for each string in the string list. You identify a single array field to use as input when you set the properties for the Loop. You can also designate a single field for the output. The Loop collects an output value each time it runs through the loop and creates an output array that contains the collected output values.
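The one-pass-per-element behavior and the collected output array can be sketched in Python (an analogy; `run_loop` and the per-item step are hypothetical):

```python
def run_loop(input_array, child_step):
    # The Loop runs its child steps once per element of the input
    # array and collects each pass's result into an output array.
    output_array = []
    for element in input_array:
        output_array.append(child_step(element))
    return output_array

# e.g. invoking a per-line-item step once for each quantity in a list
line_totals = run_loop([2, 3, 5], lambda qty: qty * 10)
```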
Use the Transform Pipeline block to make pipeline modifications. See Pipeline and Signatures for more information.
Expressions
Logical operations, comparisons, and values.
The six comparison operators are: equal to, not equal to, less than, less than or equal to, greater than, greater than or equal to. Each takes two inputs and returns true or false depending on how the inputs compare with each other.
The and block will return true only if both of its two inputs are also true. The or block will return true if either of its two inputs are true. The not block converts its Boolean input into its opposite.
You can also type a text value, select a field on which to build an expression (Select field), or select a block with no inputs.
The Field exists block allows you to check if a variable exists or not and can be used with other Control Flow blocks, for example, the if block. The Field exists block validates the existence of a particular field in the pipeline.
Note: It is recommended not to leave an input empty.
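The Field exists check against the pipeline can be sketched in Python, modeling the pipeline as nested dictionaries (an analogy; `field_exists` and the path syntax are hypothetical):

```python
def field_exists(pipeline, path):
    # Walks a slash-separated path (e.g. "order/buyer/name") through
    # nested documents and reports whether the field is present.
    current = pipeline
    for part in path.split("/"):
        if not isinstance(current, dict) or part not in current:
            return False
        current = current[part]
    return True
```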
5. Provide a valid name and description for the Integration.
6. Click Applications. The list of supported Applications appears.
7. Drag and drop an Application to the root block anchor point.
8. To select the Operation and Account for the Application, click the icon.
The following table depicts the block interactions:
Icons
Applicable for...
Action/Description
Only for the Control Flow block and the Root block.
Comments for the Control Flow block and the Root block. Click the Show comments inline check box to view the comments entered for the blocks.
Applications, Services, Integrations, and the Root block
Define Input and Output Signature
Click the Define Input and Output Signature icon to define the input and output signature of an Integration. You can declare the input and output parameters for an Integration using the Input and Output tabs. Input and output parameters are the names and types of fields that the Integration requires as input and generates as output. These parameters are collectively referred to as a signature. For example, an Integration can take two string values, an account number (AcctNum) and a dollar amount (OrderTotal), as inputs and produce an authorization code (AuthCode) as the output. On the Output tab, specify the fields that you want the Integration to return.
You can use a Document Reference to define the input or output parameters for an Integration. If you have multiple Integrations with identical input parameters but different output parameters, you can use a Document Type to define the input parameters rather than manually specifying individual input fields for each Integration. When you assign a Document Type to the Input or Output side, you cannot add, modify, or delete the fields on that part of the tab.
You can select a Document Type from the Document Reference drop-down list. To create a Document Type, from the Integration Cloud navigation bar, select Projects > <Select a Project> > Document Types > Add New Document Type. See Document Types for more information.
You can create pipeline variables as document references, create document types that consist of document references, and also define the signature of Integrations using document references.
You can also copy a field from the fields panel by clicking the icon. Depending on the context, you can either paste the field or the field path by clicking the icon. For example, if you copy a field and paste the field in the Set Value window in an Integration, (double-click a field to set a value), the field path will be pasted.
See Creating Document Types from Scratch for more information.
Note: You cannot modify or paste the child fields of a Document Reference.
Select the Validate input and Validate output options if you want to validate the input and output to the Integration, against the service input or output signature.
Select Business Data to Log
Integration Cloud allows you to log select business data from the Operation and Integration signatures either always, or only when errors occur. Values of logged fields can be viewed in the Only Business Data section in the Execution Results screen. You can also create aliases for the logged fields.
Note: User-specific data that may be considered personal data is stored and retained until the end of the retention period defined in Execution Results.
To select input or output fields for logging, click the Select Business Data to Log icon, and in the Select Business Data to Log dialog box, choose whether you want to log business data only when errors occur (On Failure) or always (Always). The default setting is On Failure. Then expand the Input Fields and Output Fields trees to display the fields available in the signature, and select the check boxes next to the fields you want to log. If you want to define an alias for a field, type an alias name beside the field. The alias defaults to the name of the selected field, but it can be modified. Click the icon to clear the selections.
When selecting fields for logging, you can create the same alias for more than one field, but this is not recommended. Having the same alias might make monitoring the fields at run time difficult.
Map Input and Output
Map the input of the operation from the Pipeline and also map the output of the operation into the pipeline.
You can copy a field from the fields panel by clicking the icon. Depending on the context, you can either paste the field or the field path by clicking the icon. If you copy an array item, the path that is pasted includes the item index. For example, if the item that is copied is A/B/C[10], then the pasted path will also include the item index [10]. But if it is pasted in the document tree, it will appear as an array, like A[ ]. If there are multiple fields with the same name in a document, and one of the occurrences of such a field is copied, then the path when pasted will contain the occurrence number in brackets, for example, the path will be A/B/C(5) if the copied element C is the 5th occurrence under field B.
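The pasted-path notation described above, with [index] for array items and (occurrence) for repeated field names, might be built as in this Python sketch (`pasted_path` and its input shape are hypothetical illustrations of the notation only):

```python
def pasted_path(segments):
    # segments is a list of (name, index, occurrence) tuples, where
    # index is an array position (rendered as [10]) and occurrence
    # distinguishes same-named fields (rendered as (5));
    # either may be None.
    parts = []
    for name, index, occurrence in segments:
        part = name
        if index is not None:
            part += "[%d]" % index
        if occurrence is not None:
            part += "(%d)" % occurrence
        parts.append(part)
    return "/".join(parts)
```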
You can select a block, other than the root block, and click Duplicate to repeat a block, click Collapse to flatten a block, click Delete to remove a block from the workspace, or click Disable to disable a block and all blocks within that block. If you disable blocks, those blocks will not be considered for execution, test, or debug operations.
Control Flow > Transform Pipeline
Make pipeline modifications. Edit data mapping, add Transformer, clear all mappings, add, delete, edit, or discard a field, set a value for a field and perform pipeline variable substitutions.
Applications
Select an Account and an Operation for the Application.
Applications
The block is not configured. Select an Account and an Operation for the Application.
Services
The block is not configured. Select a service.
Orchestrated Integrations
Click to view or modify an Orchestrated Integration after it is moved to the workspace. The Orchestrated Integration will open up for editing in a new tab.
Orchestrated Integrations
An Orchestrated Integration has been modified or newly created but not saved.
9. Create the Integration using the available constructs by inserting the blocks, setting properties, declaring the input and output parameters, setting values, performing pipeline variable substitutions (if you want to replace the value of a pipeline field at run time), and mapping the pipeline data.
10. Click the icon and then select Map Input and Output to map the Pipeline Input to the Input Signature.
11. To view only the mapped fields, select the Show Only Mapped Fields option. Map the Output Signature to the Pipeline Output in the Pipeline Data window, and then click Finish.
Indexed Mapping
You can add an indexed item to a String List, Document List, Document Reference List, or Object List and also map the indexed item. You can delete a selected indexed item provided that neither the indexed item nor any of its child fields are mapped.
When you link to an array variable or from an array variable (String List, Document List, Document Reference List, or Object List), you can specify which element in the array you want to link to or from. Click on the Add Array Item icon to get an index value for the array item. Then map the indexed item to the target. For example, you can link the second element in a String List to a String or link the third Document in a Document List to a Document variable.
For example, suppose that a buyer’s address information is initially stored in a String List. However, the information might be easier to work with if it is stored in a Document. To map the information in the String List to a Document, click on the Add Array Item icon to get an index value for the String List. Then map each indexed item to the address fields. In the following pipeline, the elements in buyerAddress String List are mapped to the address Document.
For example, suppose a String List has a length of 3. If you link to index 4 of the String List, at run time the length of the String List is increased from 3 to 5.
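Linking past the end of a list grows it to fit, as this Python sketch shows (assuming zero-based indexing, so index 4 of a 3-element list yields length 5; `link_index` is a hypothetical helper):

```python
def link_index(string_list, index, value):
    # Grow the list to reach the linked index, padding the gap
    # with empty values, then assign the linked value.
    while len(string_list) <= index:
        string_list.append(None)
    string_list[index] = value
    return string_list
```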
When you link a Document or Document List variable to another Document or Document List variable, the structure of the source variable determines the structure of the target variable.
Default Pipeline Rules for Linking to and from Array Variables
When you create links between scalar and array variables, you can specify which element of the array variable you want to link to or from. Scalar variables are those that hold a single value, such as String, Document, and Object. Array variables are those that hold multiple values, such as String List, Document List, and Object List. For example, you can link a String to the second element of a String List. If you do not specify which element in the array variable that you want to link to or from, default rules in the Pipeline view are used to determine the value of the target variable. The following table identifies the default pipeline rules for linking to and from array variables.
*If you link a scalar variable to an array variable that is empty (the variable does not have a defined length), the link defines the length of the array variable; that is, it contains one element and has a length of one. The first (and only) element in the array is assigned the value of the scalar variable.
*If you link a scalar variable to an array variable with a defined length, the length of the array is preserved and each element of the array is assigned the value of the scalar variable.
*If you link an array variable to a scalar variable, the scalar variable is assigned the first element in the array.
*If you link an array variable to an array variable that does not have a defined length, the link defines the length of the target array variable; that is, it will be the same length as the source array variable. The elements in the target array variable are assigned the values of the corresponding elements in the source array variable.
*If you link an array variable to an array variable that has a defined length, the length of the source array variable must equal the length of the target array variable. If the lengths are equal, the elements in the target array variable are assigned the values of the corresponding elements in the source array variable. If the lengths do not match, no link occurs.
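The default linking rules can be sketched in Python, modeling scalars as plain values and array variables as lists, with an empty list standing in for an array without a defined length (an illustrative model, not product code):

```python
def link(source, target):
    # Returns the value the target variable receives under the
    # default pipeline rules for linking to and from array variables.
    src_is_array = isinstance(source, list)
    tgt_is_array = isinstance(target, list)
    if not src_is_array and tgt_is_array:
        if len(target) == 0:
            return [source]            # link defines a length of one
        return [source] * len(target)  # every element gets the scalar value
    if src_is_array and not tgt_is_array:
        return source[0]               # scalar gets the first element
    if src_is_array and tgt_is_array:
        if len(target) == 0:
            return list(source)        # target takes the source's length
        if len(target) == len(source):
            return list(source)        # element-by-element copy
        return target                  # lengths differ: no link occurs
    return source                      # scalar to scalar
```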
A source variable that is the child of a Document List is treated like an array because there is one value of the source variable for each Document in the Document List. For example, if you link a String child of DocumentList1 to StringList1, StringList1 receives one element for each Document in DocumentList1.
12. Use the Transform Pipeline block under the Control Flow category to adjust the pipeline at any point in the Integration and make pipeline modifications. Within this step, you can discard or remove an existing pipeline input field (once you discard a field from the pipeline, it is no longer available subsequently), restore a discarded field, add a field, set a new value or modify the existing value of a selected field, map selected fields, remove a selected map between fields, or perform value transformations by inserting transformers.
13. Click Save to save your Integration or click Save All to save all modified Integrations. The new Integration appears in the Integrations page. Click on the Integration link in the Integrations page to view the Integration details.
14. Integration Cloud allows you to test or debug an Integration after you have created it.
After saving an Integration, in the edit Integration page, click the icon to run and test the Integration execution in real time and view the execution results on the Test Results panel.
The Test Results panel displays up to 25 test entries and the most recent test entry is located at the top of the panel. Click the icon on the Test Results panel header and click Remove All to delete the test results permanently or click Close to close the test results panel.
Point to a test result entry, click the icon, and click Download Result to save the entry locally in JSON format. Click Remove Result to remove the selected entry. Click Pin Result if you want to prevent a previous result from getting deleted as more results fill the test results panel. Click Unpin Result to move the result to the Previous Results panel.
Debugging Orchestrated Integrations
You can debug an orchestrated Integration and inspect the data flow during the debugging session. You can debug an orchestrated Integration only in the Development stage and only after the Integration has been saved.
You can do the following in debug mode:
*Start an Integration in debug mode, specify the input values, and inspect the results.
*Examine and edit the pipeline data before and after executing the individual blocks.
*Monitor the execution path, execute the blocks one at a time, or specify breakpoints where you want to halt the execution.
To start the Debug mode, click the Debug Integration icon.
The following table describes the options available in the Debug panel:
Icons
Applicable for...
Action/Description
Inserting Breakpoints
A breakpoint is a point in an Integration where you want processing to halt when you debug that Integration. Breakpoints can help you isolate a section of code or examine data values at a particular point in the execution path. For example, you might want to set a pair of breakpoints before and after a particular block so that you can examine the pipeline before and after that block executes.
Breakpoints are recognized only when you execute an Integration in a debug session. To insert a breakpoint, in debug mode, click the top-left corner of the block. To remove a breakpoint, click the inserted breakpoint.
When you execute an Integration that contains a breakpoint, the Integration is executed up to, but not including the designated breakpoint. At this point, processing stops and the debug session suspends. To resume processing, select Resume. After you resume the debug session, the Integration flow stops at the next breakpoint.
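The execute-up-to-but-not-including-the-breakpoint behavior, and how Resume continues to the next breakpoint, can be sketched in Python (an analogy; `debug_run` and its step model are hypothetical):

```python
def debug_run(steps, breakpoints, resume_count):
    # Executes steps up to, but not including, each breakpoint; each
    # Resume continues to the next breakpoint or to completion.
    executed = []
    resumes_left = resume_count
    for i, step in enumerate(steps):
        if i in breakpoints:
            if resumes_left == 0:
                return executed, "suspended at step %d" % i
            resumes_left -= 1
        executed.append(step)
    return executed, "completed"
```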
Ignore All Breakpoints
Ignores all breakpoints inserted in the Integration blocks. You cannot insert breakpoints for variables in the Expressions category.
Stepover
Executes the current block. Integration Cloud suspends the debug session immediately before executing the next block in the Integration.
Resume
The debug session resumes but suspends at the next breakpoint.
Stop
Terminates the debug session.
A debug session may also stop by itself for the following reasons:
*The Integration that you are debugging executes to completion (error or success).
*You select Stepover for the last step in the Integration.
*You exit the Integration.
Clear All Breakpoints
Removes all breakpoints inserted in the Integration.
Modifying the current pipeline data while debugging
During debugging, you can modify the contents of the pipeline. The changed values will be applied when you perform a Stepover or Resume.
While modifying the pipeline, keep the following points in mind:
*You can modify the pipeline data only during an active debug session.
*When you modify values in the pipeline, the changes apply only to the current debugging session. The Integration is not permanently changed.
*You can only modify existing variables. You cannot add new variables to the pipeline.
Versioning of Integrations
15. Integration Cloud allows you to view the version change history of an Integration.
Click the Show history option on the tool bar panel to view the version change history. While editing an Integration, you can restore an earlier version: select the earlier version of the Integration, and then click the Restore option. If you have reverted to an earlier version and there is a scheduled execution for the Integration, the reverted version of the Integration will run as per the defined schedule.
Note: If an Integration references any other Integration, then the input/output mapping of the referenced Integration will be restored to that particular version. But if the input/output mapping of the referenced Integration has been modified in a later version, the modifications will break the mappings and the Integration execution may not be successful.
Note: If you delete an Integration and then create another Integration with the same name, the version history of the deleted Integration will be available.
Point-to-Point Integrations
Reference Data
Creating Document Types from Scratch
Integration Details

Copyright © 2014-2019 | Software AG, Darmstadt, Germany and/or Software AG USA, Inc., Reston, VA, USA, and/or its subsidiaries and/or its affiliates and/or their licensors.