Replaying Replicated Records

Replay processing is used to redeliver replication data that was previously delivered, or that should have been delivered, to the target application. Using replay processing, you can read the sequential (merged) PLOG of an Adabas database and, based on the parameters you specify, send related data to one or more Event Replicator Servers. The Replay Utility, ADARPL, is the mechanism through which Event Replicator for Adabas supports replay processing. For more information about this utility, read ADARPL Utility: PLOG Replication Replay.

Replay processing in Event Replicator for Adabas can be run in any of three modes: synchronized, unsynchronized, and replay-only. These modes are described in Understanding Replay Modes.

In addition, be sure to read about ADARPL prerequisites, described in ADARPL Prerequisites.


When Is Replay Necessary?

Some reasons why you might need to replay replicated records are:

  • The target application does not process the replicated data correctly or has some sort of failure.

  • A failure occurs with the message queue tool.

  • The Event Replicator replication pool fills up. This might occur if the message queue tool is down for a prolonged period.

  • The Adabas replication pool fills up. This might occur if the Event Replicator Server is down for a prolonged period.

  • Replication was turned off for a particular file, subscription, or destination for some reason.

  • You need to send the data to another destination.

In all of these cases, you can use Event Replicator for Adabas's replay processing to redeliver the replication data that was lost.

Understanding Replay Modes

When you invoke replay processing, you must select a replay mode. Replay processing in Event Replicator for Adabas can be run in any of three modes: synchronized, unsynchronized, and replay-only. All modes replay replicated data reconstructed from protection data in the PLOG. However, they vary in the following ways:

  • They vary in the steps that must be taken to initiate and run them.

  • They vary in how they handle new transactions from Adabas while replay processing is occurring.

Synchronized Mode

Synchronized mode is the recommended mode. During synchronized replay processing, the Event Replicator Server suspends new Adabas transactions. When the replay processing is complete, the new Adabas transactions are automatically synchronized with the replayed data. This mode is only available using the online Adabas Event Replicator Subsystem screens.

The net effect of synchronized mode replay processing is that the target application receives replicated data reconstructed from the PLOG data sets before it receives any new replicated data produced by Adabas. The data is then processed in the chronologically correct sequence.

When running a synchronized replay:

  • The Event Replicator Server will activate any files, subscriptions, and destinations involved in the replay that are currently inactive.

    Note:
    Files that are active in the Event Replicator Server but inactive in the source Adabas nucleus will not be reactivated. If you would like a file in this state to be reactivated during the synchronized replay, set the file to inactive status in the Event Replicator Server before starting the synchronized replay.

  • All new Adabas data for the subscriptions and destinations involved in the replay is held in the Event Replicator replication pool until the replay processing is completed.

    If an SLOG has been defined, all new data is written to the SLOG instead. The advantage of using an SLOG is that replay processing makes less use of the replication pool, thus reducing the risk of a replication pool overflow.

  • When replay processing is complete, the new data held in the replication pool is processed as usual.

    If an SLOG was used, the Event Replicator Server reads the held transactions from the SLOG, processes them as usual, and deletes them. If additional new transactions are received while this delogging process is occurring, they are also written to the SLOG until the delogging process has caught up with the logging process.

  • If synchronized replay processing fails, the Event Replicator Server will deactivate the files, subscriptions, and destinations involved in the replay that it originally activated.

  • If an SLOG has not been defined and synchronized replay processing takes so long that the new replication data from Adabas fills up the replication pool, the Event Replicator Server will discard the new data and automatically change the replay processing to replay-only mode.

  • While replication data is stored in the SLOG file, the Event Replicator Server will not shut down normally (using the ADAEND command). It can be brought down using a HALT command and it can be canceled or otherwise terminated abnormally. If during the next session, the Event Replicator Server detects data on the SLOG originating from a replay process that took place in the previous session, it deletes this leftover data from the SLOG.

When synchronized replay processing is initiated, a token is assigned to the replay process and can be referenced using the ADARPL batch utility. For information on running the ADARPL utility, read ADARPL Utility: PLOG Replication Replay.

Unsynchronized Mode

During unsynchronized replay processing, the new Adabas transactions are processed concurrently with the replayed transactions, but no synchronization is performed. This mode is only available through batch runs of the ADARPL utility. For information on running the ADARPL utility, read ADARPL Utility: PLOG Replication Replay.

The net effect of unsynchronized mode replay processing is that the target application receives replicated data reconstructed from the PLOG data sets at the same time and interleaved with any new replicated data produced by Adabas. The data is not processed in the chronologically correct sequence.

When running an unsynchronized replay:

  • The Event Replicator Server requires that all files, subscriptions, and destinations involved in the replay be active. It will not perform any automatic activation of these resources.

  • All new Adabas data for the subscriptions and destinations involved in the replay is processed as soon as it is received.

When unsynchronized replay processing is initiated, a token is assigned to the replay process. This token can be used to cancel the replay process, if necessary.

Replay-Only Mode

During replay-only processing, replay processing is performed on the replicated transactions in the PLOG, but any new Adabas transactions for the files, subscriptions, and destinations involved in the replay are discarded. This mode is only available using the online Adabas Event Replicator Subsystem screens.

The net effect of replay-only mode replay processing is that the target application receives only replicated data reconstructed from the PLOG data sets. When replay processing is complete, another replay process should be initiated to pick up any new Adabas transactions discarded for the files, subscriptions, and destinations involved in the replay.

When running a replay-only mode replay:

  • The Event Replicator Server requires that some or all of the files and subscriptions involved in the replay be inactive before replay processing starts, so that no replication data from Adabas can be processed using these resources.

  • Replay-only mode replay processing is disallowed if one or more of the destinations involved are closed.

  • When the Event Replicator Server starts replay-only mode replay processing, it activates the necessary inactive files, subscriptions, and destinations so that only data from the PLOGs can use them, but it blocks and discards all new data from Adabas for those files, subscriptions, and destinations.

  • When processing is complete, the Event Replicator Server deactivates the files, subscriptions, and destinations that were inactive when replay-only mode processing was initiated.

    Note:
    Files that are active in the Event Replicator Server but inactive in the source Adabas nucleus are not considered inactive in this context. If you would like a file in this state to be activated during the replay-only replay, set the file to inactive status in the Event Replicator Server before starting the replay-only replay.

When replay-only mode replay processing is initiated, a token is assigned to the replay process and can be referenced using the ADARPL batch utility. For information on running the ADARPL utility, read ADARPL Utility: PLOG Replication Replay.

Prerequisites

Before you can initiate replay processing using the Adabas Event Replicator Subsystem, the following prerequisites must be met:

  • Verify that the correct PLOG is used for the run and that it is a sequential PLOG, not a dual PLOG. You can use the PLOG data set list to help determine which PLOG data sets should be used. For more information, read Reviewing and Managing the PLOG Data Set List.

  • Verify that the target application can handle duplicate records.

  • The Adabas database must be active. The Replay Utility will attempt to issue a call to Adabas to obtain the GCB, FCBs, and FDTs from the nucleus.

  • Verify that all ADARPL utility prerequisites are satisfied. For more information, read ADARPL Prerequisites.

Identifying Replay Processing Resources

Prior to initiating a replay process, we recommend that you identify the resources that will be involved in it. When you initiate a replay request, specific resources are requested. Data from the PLOG is only processed by the resources involved. If multiple resources of different types (subscriptions, destinations, or files) are requested, data is only replayed for the resources common to all requested resources. This section explains this more fully.

To identify the replay resources actually used by the replay process, you must examine the data flow paths through the Event Replicator Server that are initiated by each resource requested for the replay. Each data flow path is defined as a unique one file-one subscription-one destination combination, such that the subscription takes data from the file and delivers it to the destination.

This examination process is best described through a series of examples, using the following resource definitions (where Sx denotes a subscription name, Fx denotes a file number, and Dx denotes a destination name):

  1. S1: F1, F2, D1, D2

    Subscription S1 includes processing information (SFILE definitions) for files F1 and F2 to destinations D1 and D2.

  2. S2: F2, F3, D2, D3.

    Subscription S2 includes processing information (SFILE definitions) for files F2 and F3 to destinations D2 and D3.

Eight unique data flow paths are identified by these definitions:

  • F1, S1, D1

  • F1, S1, D2

  • F2, S1, D1

  • F2, S1, D2

  • F2, S2, D2

  • F2, S2, D3

  • F3, S2, D2

  • F3, S2, D3
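
These eight paths are simply the cross product of the files and the destinations defined within each subscription. The following sketch is a conceptual illustration in Python, not part of any Event Replicator component; the subscription dictionary mirrors the example definitions above:

from itertools import product

# Example definitions: each subscription lists its files and its destinations.
SUBSCRIPTIONS = {
    "S1": {"files": ["F1", "F2"], "destinations": ["D1", "D2"]},
    "S2": {"files": ["F2", "F3"], "destinations": ["D2", "D3"]},
}

def data_flow_paths(subscriptions):
    """Return every unique (file, subscription, destination) combination."""
    paths = []
    for sub, defs in subscriptions.items():
        for file_no, dest in product(defs["files"], defs["destinations"]):
            paths.append((file_no, sub, dest))
    return paths

for path in data_flow_paths(SUBSCRIPTIONS):
    print(path)   # prints the eight paths listed above: ('F1', 'S1', 'D1'), ...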

The remainder of this section uses these example definitions to describe the data flow paths and ultimate effect on replay processing in four different replay scenarios:

Replaying Only One Resource

If you only specify one resource in the replay request, the effect of the replay processing is determined by the union of the constituents of all data flow paths going through the one resource.

For example, based on the example definitions described earlier in this section, suppose you specify D2 as the resource for the replay request. In this case, the resources involved in the replay are F1, F2, F3, S1, S2, and D2. The data flow paths are:

  • F1, S1, D2

  • F2, S1, D2

  • F2, S2, D2

  • F3, S2, D2

Any transactions flowing through these data paths will be replayed.

Replaying Multiple Resources of One Type

If you specify multiple resources of one type (destination, subscription, or file) in the replay request, the effect of the replay processing is determined by the union of the constituents of all data flow paths going through any specified resource.

For example, based on the example definitions described earlier in this section, suppose you specify D1 and D3 as the resources for the replay request. In this case, the resources involved in the replay are F1, F2, F3, S1, S2, D1, and D3. The data flow paths are:

  • F1, S1, D1

  • F2, S1, D1

  • F2, S2, D3

  • F3, S2, D3

Any transactions flowing through these data paths will be replayed.

Replaying Multiple Resources of Different Types

If you specify multiple resources of different types (destination, subscription, or file) in the replay request, the data flow paths are first grouped by resource type (taking the union of the paths through the resources of each type), and replay processing then uses only the data flow paths common to all of the resource types you specified.

For example, based on the example definitions described earlier in this section, suppose you specify S1 and D2 as the resources for the replay request. In this case, the resources involved in the replay are F1, F2, S1, and D2. But the data flow paths used for the replay must be the data flow paths common to both the S1 subscription and the D2 destination.

The data flow paths for the S1 subscription are:

  • F1, S1, D1

  • F1, S1, D2

  • F2, S1, D1

  • F2, S1, D2

The data flow paths for the D2 destination are:

  • F1, S1, D2

  • F2, S1, D2

  • F2, S2, D2

  • F3, S2, D2

However, the only data flow paths the S1 subscription and the D2 destination share are:

  • F1, S1, D2

  • F2, S1, D2

Any transactions flowing through these two data paths will be replayed.

Replaying Resources With Nothing in Common

It is an error to request a replay resource that has no data flow paths in common with other requested resources. When this happens, the entire replay request will be rejected.

For example, based on the example definitions described earlier in this section, suppose you specify S1, D2, and D3 as the resources for the replay request. In this case, the resources involved in the replay should be F1, F2, S1, D2, and D3. But, as you will see below, the D3 data flow paths have nothing in common with the data flow paths for S1 and D2:

The data flow paths for the S1 subscription are:

  • F1, S1, D1

  • F1, S1, D2

  • F2, S1, D1

  • F2, S1, D2

The data flow paths for the D2 destination are:

  • F1, S1, D2

  • F2, S1, D2

  • F2, S2, D2

  • F3, S2, D2

The data flow paths for the D3 destination are:

  • F2, S2, D3

  • F3, S2, D3

Since the D3 data flow paths are only for subscription S2 and the S1 data flow paths do not include D3, there is no common data flow path for this replay request and the replay request is in error.
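
Taken together, the four scenarios above describe a single selection rule: collect the data flow paths going through each requested resource, combine them by union within each resource type, keep only the paths common to every requested type, and reject the request if a requested resource is left without any selected path. The sketch below is a conceptual illustration of that rule in Python, not an Event Replicator interface; the path list is the eight example paths identified earlier:

# The eight example data flow paths as (file, subscription, destination) tuples.
PATHS = [
    ("F1", "S1", "D1"), ("F1", "S1", "D2"),
    ("F2", "S1", "D1"), ("F2", "S1", "D2"),
    ("F2", "S2", "D2"), ("F2", "S2", "D3"),
    ("F3", "S2", "D2"), ("F3", "S2", "D3"),
]

def replay_paths(files=(), subscriptions=(), destinations=()):
    """Return the data flow paths selected by a replay request.

    Within each resource type the requested resources are combined (union);
    across the requested resource types only the common paths are kept
    (intersection). A requested resource that appears in none of the selected
    paths makes the whole request invalid.
    """
    requested_by_type = (files, subscriptions, destinations)
    selected = set(PATHS)
    for position, requested in enumerate(requested_by_type):
        if requested:  # only requested resource types restrict the selection
            selected &= {p for p in PATHS if p[position] in requested}
    for position, requested in enumerate(requested_by_type):
        for resource in requested:
            if not any(p[position] == resource for p in selected):
                raise ValueError(resource + " shares no data flow path "
                                 "with the other requested resources")
    return sorted(selected)

print(replay_paths(destinations=["D2"]))                        # scenario 1: four paths
print(replay_paths(destinations=["D1", "D3"]))                  # scenario 2: four paths
print(replay_paths(subscriptions=["S1"], destinations=["D2"]))  # scenario 3: the two common paths
try:
    replay_paths(subscriptions=["S1"], destinations=["D2", "D3"])  # scenario 4
except ValueError as error:
    print("rejected:", error)  # D3 shares no path with S1, so the request is in error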

Initiating Replay Processing

Replay processing can be initiated in any of the following ways.

  1. It can be initiated in a batch job using the ADARPL utility without specifying a replay process token. In this case, an unsynchronized replay is initiated. For complete information on initiating replay processing using the ADARPL utility without a replay process token, read ADARPL Utility: PLOG Replication Replay, using the syntax described in Syntax for Initiating ADARPL Without A Token.

  2. It can be initiated using a replay process token produced in the Adabas Event Replicator Subsystem. This method involves a combination of the Adabas Event Replicator Subsystem and a batch ADARPL utility job. In this case, you first use the Adabas Event Replicator Subsystem to generate a synchronized or replay-only replay request. The replay request is assigned a token that you then use in the batch ADARPL utility job. For information on initiating synchronized and replay-only replay requests using the Adabas Event Replicator Subsystem and ADARPL, read Initiating a Replay Request Using the Adabas Event Replicator Subsystem.

  3. It can be initiated using a replay process token produced by a standalone application programming interface (API) provided by Software AG. In this case, you first use the API to generate a synchronized or replay-only replay request. For complete information on initiating replay processing using the standalone API, read Initiating a Replay Request Using the Standalone API.

  4. It can automatically be initiated by the Event Replicator Server whenever a replay process token is produced by the Adabas Event Replicator Subsystem or by a standalone API provided by Software AG. In this case, specific JCL code must be incorporated into the Event Replicator Server startup JCL and, if running, the Event Replicator Server must be stopped and restarted. Then you must generate a synchronized or replay-only replay request using the Adabas Event Replicator Subsystem or the Software AG-supplied standalone API. For complete information on automating replay processing, read Automating Replay Processing.

The remainder of this section describes the steps for initiating a replay request using the standalone API and lists what's returned from a run of the standalone API:

Initiating a Replay Request Using the Standalone API

To generate a synchronized or replay-only replay request using the standalone API, complete the following steps:

  1. In the Natural library provided by Software AG, edit the library object ARFP003. This library object is a sample program you can use to call the Event Replicator for Adabas program ARFN003.

    If you use your own program instead of ARFP003, include the SYSRPTR library (the INPL library) in the library concatenation for your program job when calling the ARFN003 program. The SYSRPTR library provides a routine that is invoked by ARFN003. You can use the Natural subprogram, USR1025N, to do this.

  2. Search for the comment "Set values below" in ARFP003. A sample of this section of the program is provided below.

    *
    *  Set values below.
    *
    #MODE       = 'S'
    #DBID       = H'0001'
    #REPTOR-ID  = H'0002'
    #FROM-DATE  = '2007/11/01'
    #FROM-TIME  = '01:02:03'
    #TO-DATE    = '2007/11/01'
    #TO-TIME    = '21:22:23'
    #START-DATE = '          '
    #START-TIME = '        '
    #DEST-LIST  = 'NULL01'
    #SUB-LIST   = ' '
    #AUTOMATED  = 'N'
    #TIMEOUT    =  900
    *
    CALLNAT 'ARFN003'
             #MODE
             #DBID
             #REPTOR-ID
             #FROM-DATE
             #FROM-TIME
             #TO-DATE
             #TO-TIME
             #START-DATE
             #START-TIME
             #DEST-LIST
             #SUB-LIST
             #TOKEN
             #RESPONSE
             #SUBCODE
             #AUTOMATED
             #TIMEOUT
             #MESSAGE
    *
    WRITE '----- COMPLETED ------'
    WRITE 'RESPONSE:' #RESPONSE
    WRITE 'SUBCODE: ' #SUBCODE
    WRITE 'TOKEN:   ' #TOKEN
    WRITE 'MSG:     ' #MESSAGE
    WRITE '------ E N D ---------'
    *
    END
    
  3. Supply values for the ARFP003 variables listed below the comment. Descriptions of all variables are provided in the table below as well as in the ARFP003 program itself.

    Warning:
    Do not modify the order of the variables as listed in the CALLNAT 'ARFN003' section of the program. If you do, the API will either fail or your results will not be valid.

    Variable Name Description
    #AUTOMATED

    Indicate whether or not you want the replay automated. Valid values are "Y" (perform an automated replay) or "N" (do not perform an automated replay).

    An automated replay will automatically perform steps 7 through 9 of this procedure. A non-automated replay will not perform these steps automatically, and you will need to perform them manually. For complete information about automating replay processing, read Automating Replay Processing.

    Note:
    If the RECORDPLOGINFO parameter has been set to NO, you cannot run an automated replay.

    #DBID The database ID of the Adabas database from which you want replicated transactions replayed.
    #DEST-LIST

    A list of destinations for which the replay request should be initiated. When the replay request is initiated, transactions will be replayed that were originally destined for the destinations on this list.

    Up to 60 eight-byte entries can be specified in the list.

    #MODE The replay mode to be used. Valid values are "S" (synchronized) or "R" (replay only). For complete information about the differences between replay modes, read Understanding Replay Modes.
    #FROM-DATE
    #FROM-TIME
    The date and time from which replicated transactions should be replayed. Dates should be specified in YYYY/MM/DD format; times should be specified in HH:MM:SS format. Replay processing will start with transactions in the PLOG that ended at or after this date and time. From dates and times must be earlier than the current date and time and earlier than the specified end date and time.
    #REPTOR-ID The database ID of the Event Replicator Server to which the replayed transactions will be sent. This is also the server with the Replicator system file that stores the destination and subscription definitions requested for replay processing.
    #START-DATE
    #START-TIME

    The date and time of the PLOG entries that should be used as a starting point for the replay processing. This date and time are used to identify the PLOG with which to start replay processing.

    Dates should be specified in YYYY/MM/DD format; times should be specified in HH:MM:SS format. Replay processing will search the PLOG with this start date and time first for records that match the other replay processing criteria listed on this screen.

    A start date and time must be specified if an automated replay is requested.

    #SUB-LIST

    A list of subscriptions for which the replay request should be initiated. When the replay request is initiated, transactions will be replayed that were originally initiated by the subscriptions on this list.

    Up to 60 eight-byte entries can be specified in the list.

    #TIMEOUT Optionally, specify the length of time, in seconds, after which the replay request should time out.
    #TO-DATE
    #TO-TIME

    The date and time to which replicated transactions should be replayed. Dates should be specified in YYYY/MM/DD format; times should be specified in HH:MM:SS format. Replay processing will stop with transactions in the PLOG that ended before this date and time. End dates and times must be later than the specified start date and time.

    If no end date and time are specified, the end time is the current time (the time the replay request is issued).

  4. When all variables have been supplied to your satisfaction, save ARFP003.

  5. In an application you have created, add a call to ARFP003 and save your application.

  6. Run your application.

    The replay request is generated and a replay token is assigned to it. This replay token is displayed in an API message and in the Event Replicator Server job log.

    Make note of this token number as it is used in step 9 if you are initiating replication replay using a batch ADARPL job.

    If you have automated replication replay processing, this token number is picked up automatically by the generated replay jobstream and you can skip the remaining steps in this procedure. For complete information about automating replay processing, read Automating Replay Processing.

    For complete information about what's returned from this run, read What's Returned from a Standalone API Run.

  7. This step should be performed only if the #AUTOMATED variable is set to "N" (an automated replay is not requested).

    If necessary, issue a force-end-of-PLOG request to the Adabas database and wait until the resulting PLCOPY job has copied or merged the latest PLOG data set. This is necessary only when the PLOG for the selected replay end date and time has not yet been copied or merged, for example, if no end date and time were specified in the replay request.

  8. This step should be performed only if the #AUTOMATED variable is set to "N" (an automated replay is not requested).

    Identify the sequential PLOG data sets that contain the protection data for the replicated records you need replayed. The PLOG data sets must build a complete sequence from the PLOG that includes the replay processing start time to the latest PLOG you copied or merged in the previous step.

  9. This step should be performed only if the #AUTOMATED variable is set to "N" (an automated replay is not requested).

    Run an ADARPL utility job, using the syntax described in Syntax for Initiating ADARPL With A Token. Be sure to specify:

    • A concatenated list of the PLOG data sets you identified in the previous step.

    • The replay request token assigned in step 6. This token should be specified in the ADARPL TOKEN parameter.

    • The Event Replicator Server ID of the Event Replicator Server to which the replayed transactions should be sent. This ID should be specified in the ADARPL RPLTARGETID parameter.

    For more information about using the ADARPL utility in general, read ADARPL Utility: PLOG Replication Replay.

What's Returned from a Standalone API Run

The following parameters may be returned by the standalone API:

Parameter Name Description
#MESSAGE A message associated with the response code or subcode.
#RESPONSE The response code issued from an attempt to initiate the replay.
#SUBCODE The subcode associated with the response code (#RESPONSE).
#TOKEN The token number assigned to the initiated replay.

Cancelling Replay Processing

You can cancel a replay process if you decide that it is not producing the desired results. However, you will then have to determine how to get the replicated data back in sync with the source database.

To cancel replay processing:

  • Issue the RPLCLEANUP command. This command will stop replay processing (if it is running when the RPLCLEANUP command is entered) and will clean up any open transactions in the Event Replicator Server that are associated with replay processing. For more information, read RPLCLEANUP Command.

Automating Replay Processing

Automated ADARPL processing requires that you specify two additional JCL statements in the Event Replicator Server nucleus startup JCL: DDJCLIN and DDJCLOUT. This section describes the steps you need to perform to set up automated ADARPL processing, describes the automated replay JCL skeleton, and provides some sample JCL.

Initiating Automated ADARPL Processing

To initiate automated ADARPL processing:

  1. Create an appropriate automated replay JCL skeleton. This skeleton can be coded directly in the Event Replicator Server startup JCL or in a sequential data set and will be tailored by the Event Replicator Server during automated replay processing. The sample JCL given elsewhere in this section provides an example of coding the automated replay JCL skeleton directly in the Event Replicator Server startup job. For more information about the skeleton itself, read The Automated ADARPL Skeleton.

  2. Add a DDJCLIN JCL statement to the Event Replicator Server nucleus startup JCL. This JCL statement identifies the sequential data set containing the automated replay JCL skeleton or specifies the skeleton itself. The sample JCL given elsewhere in this section provides an example of coding the automated replay JCL skeleton directly in the Event Replicator Server startup job.

  3. Add a DDJCLOUT JCL statement to the Event Replicator Server nucleus startup JCL. This JCL statement identifies the location of the generated jobstream for automated replay processing. As the Event Replicator Server tailors the automated replay JCL skeleton, it writes the generated jobstream to the file identified by the DDJCLOUT JCL statement in 80-byte records. The file is closed once the skeleton has been completely processed.

    The DDJCLOUT JCL statement may specify a sequential data set or, in z/OS systems, it may direct the output to the internal reader for immediate job processing. The z/OS internal reader is requested by coding SYSOUT=(*,INTRDR) on the DDJCLOUT JCL statement.

  4. If the Event Replicator Server is running, stop and restart it to pick up the new JCL specifications.

  5. Generate replay process tokens in any of the following ways:

    • Generate a replay process token using the Adabas Event Replicator Subsystem. Using the Adabas Event Replicator Subsystem, generate a synchronized or replay-only replay request. For information on initiating synchronized and replay-only replay requests using the Adabas Event Replicator Subsystem and ADARPL, read Initiating a Replay Request Using the Adabas Event Replicator Subsystem.

    • Generate a replay process token using a standalone application programming interface (API) provided by Software AG. Using the API, generate a synchronized or replay-only replay request. For complete information on initiating replay processing using the standalone API, read Initiating a Replay Request Using the Standalone API.

    Note:
    In all replay requests, be sure to turn automation on, by specifying "Y" for the Automated field in the Adabas Event Replicator Subsystem or by specifying "Y" for the #AUTOMATED variable in the Software AG-supplied API.

    Once a replay request is generated, it is assigned a token that will automatically be detected by the Event Replicator Server and used for automated replay processing.

The Automated ADARPL Skeleton

The automated replay JCL skeleton can be coded directly in the Event Replicator Server startup JCL or in a sequential data set. It is a jobstream containing 80-byte records that include platform-dependent JCL and utility control statements. Designated points in the jobstream will be automatically tailored by the Event Replicator Server when an ADARPL token is encountered and if automated replay is specified in the associated replay requests. Only columns 1 to 72 of the skeleton jobstream are examined for tailoring.

The JCL skeleton should be similar to the JCL below. Tailoring will occur at the points designated by keywords beginning with "%":

//ADARPL  JOB
//*
//*  ADARPL: Sample JCL skeleton for automated ADARPL generation
//*
//RPL    EXEC  PGM=ADARUN
//STEPLIB  DD  DISP=SHR,DSN=ADABAS.Vvrs.LOAD       <=== Adabas load lib 
//DDASSOR1 DD  DISP=SHR,DSN=EXAMPLE.DB%DBID.ASSOR1 <=== Adabas ASSO
//DDDRUCK  DD  SYSOUT=*
//DDPRINT  DD  SYSOUT=*
//SYSUDUMP DD  SYSOUT=*
//*  The following record will be replaced with a concatenation of
//*  sequential PLOG data sets 
%SEQUENTIAL
//DDCARD   DD  *
ADARUN PROG=ADARPL,DBID=%DBID,SVC=svc,DEVICE=3390
/*
//DDKARTE  DD  *
ADARPL REPLAY
ADARPL LRPL=1500K
*    The following record will be replaced with ADARPL control 
*    statements
%KARTE
/*

The following keywords in this skeleton will be tailored:

Keyword Tailoring Description
%DBID This keyword may appear in any position on any record in the ADARPL skeleton. It identifies locations in the jobstream that should be replaced with the five-byte numeric DBID specified in the replay request identified by the ADARPL token. The DBID is padded with leading zeros.
%SEQUENTIAL This keyword must appear in column 1 and be the only text in a record in the ADARPL skeleton. It identifies the location in the jobstream where platform-dependent PLOG JCL statements will be generated by the Event Replicator Server.
%KARTE This keyword must appear in column 1 and be the only text in a record in the ADARPL skeleton. It identifies the location where all other ADARPL utility parameters will be generated by the Event Replicator Server.

The %KARTE keyword does not generate any platform-dependent JCL, only additional ADARPL control statements (typically the RPLTARGETID and TOKEN parameters). It must be preceded by JCL that identifies the DDKARTE file and by an initial ADARPL statement that invokes ADARPL utility processing and, optionally, provides values for the NU and LRPL parameters. More than one ADARPL statement can precede and follow the %KARTE keyword. For complete information about the ADARPL syntax, read Syntax for Initiating ADARPL With A Token (Synchronized and Replay-only Replay Modes).
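
Conceptually, this tailoring is a record-by-record text substitution over the 80-byte skeleton records, with only columns 1 to 72 examined. The following sketch models the documented keyword rules in Python; it is an illustration only, not the Event Replicator Server's actual logic, and the PLOG DD statements and ADARPL control statements passed to it are placeholders supplied by the caller:

def tailor_skeleton(skeleton_records, dbid, plog_dd_records, adarpl_statements):
    """Model of the documented %DBID, %SEQUENTIAL and %KARTE substitutions.

    %DBID may appear anywhere and is replaced by the five-digit DBID padded
    with leading zeros. %SEQUENTIAL and %KARTE must start in column 1 and be
    the only text on the record; each such record is replaced in its entirety
    by generated statements. Only columns 1 to 72 are examined for tailoring.
    """
    tailored = []
    for record in skeleton_records:
        scanned, rest = record[:72], record[72:]
        if scanned.rstrip() == "%SEQUENTIAL":
            tailored.extend(plog_dd_records)     # concatenation of sequential PLOG DD statements
        elif scanned.rstrip() == "%KARTE":
            tailored.extend(adarpl_statements)   # e.g. the RPLTARGETID and TOKEN parameters
        else:
            tailored.append(scanned.replace("%DBID", "%05d" % dbid) + rest)
    return tailored

With a DBID of 1, for example, the skeleton record containing EXAMPLE.DB%DBID.ASSOR1 becomes EXAMPLE.DB00001.ASSOR1, while the %SEQUENTIAL and %KARTE records are replaced entirely by the generated PLOG DD statements and ADARPL control statements.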

Sample Automated Replay JCL with Skeleton Statements

The following z/OS sample shows the additions you need to make to the Event Replicator Server startup JCL to support automated replay processing. The automated replay skeleton JCL is read in as specified by the DDJCLIN statement, tailored, and then written out as specified by the DDJCLOUT statement. In this example, the skeleton is included directly in the startup JCL and is delimited by the "##" characters. In addition, the tailored output is directed to the z/OS internal reader for immediate job processing, as directed by the DDJCLOUT statement.

//DDJCLOUT DD  SYSOUT=(*,INTRDR)                   <=== Job output 
//DDJCLIN  DD  DATA,DLM='##'                       <=== JCL skeleton
//ADARPL  JOB
//*
//*  ADARPL: Sample JCL skeleton for automated ADARPL generation
//*
//RPL    EXEC  PGM=ADARUN
//STEPLIB  DD  DISP=SHR,DSN=ADABAS.Vvrs.LOAD       <=== Adabas load lib 
//DDASSOR1 DD  DISP=SHR,DSN=EXAMPLE.DB%DBID.ASSOR1 <=== Adabas ASSO
//DDDRUCK  DD  SYSOUT=*
//DDPRINT  DD  SYSOUT=*
//SYSUDUMP DD  SYSOUT=*
//*  The following record will be replaced with a concatenation of
//*  sequential PLOG data sets 
%SEQUENTIAL
//DDCARD   DD  *
ADARUN PROG=ADARPL,DBID=%DBID,SVC=svc,DEVICE=3390
/*
//DDKARTE  DD  *
ADARPL REPLAY
ADARPL LRPL=1500K
*    The following record will be replaced with ADARPL control 
*    statements
%KARTE
/*
// 
##                                                 <=== End of DDJCLIN
//*