Setting Quality of Service for a Process
You can define quality of service settings when you design a process. These settings are made in Software AG Designer and are used to determine how a process executes at run time, enabling you to select a balance of performance, reliability, visibility, and control.
To specify the quality of service settings for a process in Designer
1. In the Process Development perspective, open the process you want to work with.
2. Click the Properties view if it is not already visible.
3. On the Advanced tab, click Run Time.
4. Using the following descriptions, enter your selections for the quality of service settings.
Optimize Locally
Execute adjacent steps on the same Integration Server without publishing transition documents. Enabled by default.
*Select Optimize Locally when you want to use a pipeline to pass data from step to step on the local server, and publish a process transition document only when there is a transition to a step running on another server, or if a process splits into more than one branch.
*Clear this check box to always publish a process transition document when transitioning to any step, no matter where it is located. No pipeline is used.
You can select Optimize Locally to decrease document message traffic and improve performance. However, if a step fails, the process can recover automatically only from the most recently published process transition document, and the step that published it might not be the step that failed. For example:
Suppose process step 1 runs on Server A and process steps 2, 3, and 4 run on Server B. When you are optimizing locally and step 3 fails, the most recently published process transition document is the one produced by step 1, because the Process Engine did not publish a transition document for step 2 or step 3. The process therefore recovers automatically from step 1's transition document, and steps 2 and 3 run again.
When you do not optimize locally, every step publishes a process transition document, so the process can automatically recover at the point of failure. In this case, the process recovers from the document published by step 2, and only step 3 runs again.
The biggest risk of optimizing locally is duplication of work. For example, you might not want to risk duplicating work for processes that store, synchronize, or correlate data. For processes that do less critical work, the performance benefits might outweigh the risks.
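For illustration only, the following sketch models the recovery rule just described; the class and method names are hypothetical and are not part of the Process Engine API.

// Hypothetical sketch -- not Process Engine code. Models the recovery rule:
// after a failure, execution resumes at the step immediately after the one
// that published the most recent transition document, so any steps between
// that point and the failed step are run again (duplicated work).
public class RecoveryPointSketch {

    // lastPublishedStep: highest-numbered completed step that published a
    // process transition document before the failure.
    static int firstStepToRerun(int lastPublishedStep) {
        return lastPublishedStep + 1;
    }

    public static void main(String[] args) {
        // Optimize Locally: only step 1 (the Server A to Server B transition)
        // published a document before step 3 failed, so steps 2 and 3 run again.
        System.out.println(firstStepToRerun(1)); // 2

        // Not optimized locally: step 2 also published, so only step 3 runs again.
        System.out.println(firstStepToRerun(2)); // 3
    }
}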
When a referenced process is invoked and Optimize Locally is selected, the Process Engine attempts to invoke the referenced process locally, subject to the following conditions:
*The referenced process exists on the same Process Engine node as the parent step.
*The referenced process has no subscription filter.
*If the Integration Server thread usage is below the threshold, communication is done with a direct service invocation on a new Integration Server thread. Otherwise, communication is through the publishing of a document that is handled on the appropriate model trigger.
Note: The above thread behavior applies to all types of child invocations, including static and dynamic reference processes, and static and dynamic callable processes.
Subscription filters are enforced at the trigger level. If there is a filter on a referenced process, the Process Engine will ignore the Optimize Locally setting and publish the transition document.
When Optimize Locally is selected and data is returned from the referenced process to the parent, the parent step must be running on the same Process Engine node as the referenced process for successful data transfer.
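For illustration only, the following sketch models the decision described above for invoking a referenced process when Optimize Locally is selected; the names and thread-usage values are hypothetical, not Process Engine API.

// Hypothetical sketch -- not Process Engine code. Models how a referenced
// process is reached when Optimize Locally is selected.
public class ReferencedProcessInvocationSketch {

    enum Communication { DIRECT_SERVICE_INVOCATION, PUBLISH_TRANSITION_DOCUMENT }

    static Communication chooseCommunication(boolean onSameNodeAsParent,
                                             boolean hasSubscriptionFilter,
                                             double threadUsage,
                                             double threadUsageThreshold) {
        // A subscription filter is enforced at the trigger level, so the
        // Optimize Locally setting is ignored and the document is published.
        if (!onSameNodeAsParent || hasSubscriptionFilter) {
            return Communication.PUBLISH_TRANSITION_DOCUMENT;
        }
        // Below the Integration Server thread-usage threshold: direct service
        // invocation on a new thread; otherwise publish a document that is
        // handled on the appropriate model trigger.
        return threadUsage < threadUsageThreshold
                ? Communication.DIRECT_SERVICE_INVOCATION
                : Communication.PUBLISH_TRANSITION_DOCUMENT;
    }

    public static void main(String[] args) {
        System.out.println(chooseCommunication(true, false, 0.40, 0.75)); // DIRECT_SERVICE_INVOCATION
        System.out.println(chooseCommunication(true, true, 0.40, 0.75));  // PUBLISH_TRANSITION_DOCUMENT
    }
}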
Express Pipeline
Send a reduced data set between the steps in a process. Enabled by default.
Using the express pipeline can significantly improve performance when pipelines are large (1 MB or more).
This setting applies to both the pipeline and transition document methods of transferring data and is independent of the Optimize Locally setting.
*Select Express Pipeline to specify that you want to pass a reduced (express) data set from step to step.
*Clear this check box to specify that the complete data set is passed from step to step.
Note: When a process is resubmitted, the complete data set is passed, regardless of this setting.
When you use the complete data set, the server passes all data from step to step, regardless of whether outputs are used by downstream steps.
With the Express Pipeline option enabled, the server reads the list of inputs in the process description file and passes a reduced data set that contains only those outputs explicitly specified in the process model version as inputs to following steps. All other data is discarded.
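For illustration only, the following sketch models this pruning; the class name and sample pipeline fields are hypothetical, and the Process Engine's actual implementation is not shown here.

// Hypothetical sketch -- not the Process Engine implementation. Models the
// pruning described above: only pipeline entries that downstream steps
// explicitly declare as inputs are passed along; everything else is discarded.
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class ExpressPipelineSketch {

    // Keeps only the pipeline keys that downstream steps declare as inputs.
    static Map<String, Object> reduce(Map<String, Object> pipeline,
                                      Set<String> declaredDownstreamInputs) {
        Map<String, Object> reduced = new LinkedHashMap<>();
        pipeline.forEach((key, value) -> {
            if (declaredDownstreamInputs.contains(key)) {
                reduced.put(key, value);
            }
        });
        return reduced;
    }

    public static void main(String[] args) {
        Map<String, Object> pipeline = new LinkedHashMap<>();
        pipeline.put("orderId", "42");
        pipeline.put("customer", "ACME");
        pipeline.put("debugTrace", "...");   // added at run time, never declared downstream

        // debugTrace is discarded because no downstream step declares it as an input.
        System.out.println(reduce(pipeline, Set.of("orderId", "customer")));
        // prints {orderId=42, customer=ACME}
    }
}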
Do not use the Express Pipeline setting for:
*Processes that include steps with services that add values to the pipeline data; the server will not include the added values because they are not explicitly specified as inputs to downstream steps.
*Processes that contain a process-wide error handler step; the process-wide error handler step is not recognized as a downstream step and will therefore not receive the necessary input data.
Important: Designer never implements Express Pipeline for dynamic referenced process/call activity steps, regardless of the Express Pipeline option setting.
The purpose of Express Pipeline is to protect explicitly defined pipeline elements, such as step inputs and outputs, from being removed at run time. Early in your development cycle, however, you may not yet have added any inputs or outputs to the steps in your model, leaving the model incomplete. In such cases, when you generate (build and upload) the model, the Express Pipeline setting displayed on the WmPRT home page does not match the value of the Express Pipeline check box as set in Designer.
Important: As long as you have not added any inputs or outputs to the steps, Express Pipeline is always displayed as "No" on the WmPRT home page. At run time, however, the Process Engine always uses the Express Pipeline value as set in Designer.
Volatile Transition Documents
Send process transition information in volatile mode. Enabled by default. This setting applies to Universal Messaging and Broker (deprecated) transition documents and referenced process start documents, and also to JMS transition and referenced process start messages.
*Select Volatile Transition Documents to specify the following:
*For the Subscription (Publishable Documents) protocol: Process transition documents and referenced process start documents are stored in memory.
*For the JMS (Triggered Processes) protocol: Process transition messages and referenced process start messages are sent with the JMS delivery mode NON_PERSISTENT.
*Clear this check box to specify the following:
*For the Subscription (Publishable Documents) protocol: Process transition documents and referenced process start documents are stored on the local hard disk drive.
*For the JMS (Triggered Processes) protocol: Process transition messages and referenced process start messages are sent with the JMS delivery mode PERSISTENT (see the sketch after this list).
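For context, these two modes correspond to the standard JMS DeliveryMode constants. The following sketch uses the plain javax.jms API to show what NON_PERSISTENT and PERSISTENT mean in general; it is not Process Engine code, and the connection, queue name, and message body are placeholders.

// Plain JMS sketch (javax.jms) -- not Process Engine code. Shows the meaning of
// the two delivery modes that the Volatile Transition Documents setting maps to.
import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class DeliveryModeSketch {

    static void send(Connection connection, String queueName, String body,
                     boolean volatileTransitions) throws JMSException {
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        try {
            Queue queue = session.createQueue(queueName);
            MessageProducer producer = session.createProducer(queue);

            // NON_PERSISTENT: the provider may keep the message in memory only,
            // so it can be lost if the provider fails.
            // PERSISTENT: the provider stores the message before the send completes.
            producer.setDeliveryMode(volatileTransitions
                    ? DeliveryMode.NON_PERSISTENT
                    : DeliveryMode.PERSISTENT);

            TextMessage message = session.createTextMessage(body);
            producer.send(message);
        } finally {
            session.close();
        }
    }
}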
Enabling Volatile Transition Documents can significantly improve performance when documents are large (2 MB or larger). However, if the Universal Messaging, Broker (deprecated) or JMS server fails while a step is running, the process cannot automatically recover and completion cannot be guaranteed, and the document or message will be lost. If you are logging step pipelines to the Process Audit Log database component, you can manually recover the process by resubmitting steps through webMethods Monitor.
About Message Provider Behavior:
When the message provider receives a process transition document or referenced process document, it places the document in the process trigger's client queue and also stores it either in memory or on disk.
When the trigger retrieves the document, if the document was stored in memory it is immediately acknowledged and deleted from the message provider. If the document was stored on disk on the message provider, the document is acknowledged and deleted by the message provider after the process has published the next process transition document or the process completes.
Volatile Tracking
Store process tracking information in memory only. Disabled by default.
Note: If the Process Engine is running in a clustered environment, volatile tracking cannot be used, and this setting is ignored.
*Select Volatile Tracking to specify that the Process Engine stores process status in memory.
*Clear this check box to specify that the Process Engine stores process status in the Process Engine database component.
The Process Engine stores process status while a step that requires it is running. Process status comprises content from:
*External documents and process transition documents
*Referenced process documents
*Process and step status
*Process iteration count and correlation IDs
*Step and process timeouts
Using volatile tracking can significantly improve performance. However, if you use volatile tracking and a server fails while running a step, process status will be lost.
If you are logging process step status to the Process Audit Log database component, the step iteration count will be inaccurate in webMethods Monitor, making it harder to address the negative effects of server failure and to determine how much work has been duplicated.
If you choose to store process status in the Process Engine database component, you must configure the database component. For instructions, see "Configuring and Monitoring the Process Engine" in the PDF publication Administering webMethods Process Engine. For more information about the Process Engine database component, see Installing Software AG Products.
Minimum Logging Level
Sets the minimum audit logging threshold for this process at run time. Set to 5 - Process and all events, activities, and looped activities by default.
At generation time, the Minimum Logging Level is set in webMethods Monitor based on this value. On subsequent generations, if the Minimum Logging Level is increased in Designer, the level in Monitor is also increased. If the Minimum Logging Level in Designer is lowered in subsequent generations, the level in Monitor is not lowered. You must explicitly lower the audit logging level in Monitor.
If a user attempts to set a new process audit logging level in Monitor, the user will not be able to specify a logging level that is numerically lower than the value you specify here. For example, if you specify a level of 2-Errors Only here, the user will not be able to specify a logging level of 1-None in webMethods Monitor; the user’s choices are limited to audit logging levels 2, 3, 4, and 5.
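For illustration only, the following sketch models this constraint; the names are hypothetical and do not reflect the Monitor API.

// Hypothetical sketch -- not the Monitor API. Models the rule that Monitor
// rejects any audit logging level numerically lower than the minimum set in
// Designer at generation time.
public class MinimumLoggingLevelSketch {

    // Returns true if Monitor accepts the requested level for this process.
    static boolean isAllowed(int designerMinimum, int requestedInMonitor) {
        return requestedInMonitor >= designerMinimum;
    }

    public static void main(String[] args) {
        int designerMinimum = 2;                            // 2-Errors only, set in Designer
        System.out.println(isAllowed(designerMinimum, 1));  // false: 1-None is rejected
        System.out.println(isAllowed(designerMinimum, 4));  // true:  4-Process and start events
    }
}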
If you are sending process data to webMethods Optimize to run historical data fit distributions in Process Simulation, you must set the minimum audit logging level to 5-Process and all events, activities, and looped activities. If you need only process-level logging and step errors to be sent to Optimize, then logging level 3 is sufficient.
*For more information about process audit logging, including descriptions of logging levels, see the PDF publication webMethods Monitor User’s Guide.
Select one of the following values from the drop-down list:
1-None
2-Errors only
3-Process only
4-Process and start events
5-Process and all events, activities, and looped activities
Note: Logging level 6 - Process and all events, activities, and looped activities was available in version 8.2 but has been removed in later versions and its functionality moved into logging level 5.
