Market data aggregator
The market data aggregator presents data from multiple sessions as a single composite session. As with any other datasource, applications access this synthetic datasource through the market data manager and client APIs. The aggregator also provides, where available, Extra Params (EP) updates for connected datastreams. The aggregator supports only compound delta updates.
The aggregator supports publication of:
* com.apama.md.BBA (best bid and ask, or top-of-book)
* com.apama.md.X (aggregated books, or synthetic cross rate books)
The aggregator supports underlying data types of:
* com.apama.md.O (orderbooks, or Market-by-Order (MBO))
* com.apama.md.D (depth, or Market-by-Price (MBP))
* com.apama.md.QB (quotebooks)
* com.apama.md.X (aggregated books, or synthetic cross rate books)
Creating an aggregator instance
The following code excerpt shows how to create an instance of an aggregator using two underlying sessions (1 and 2):
action createAggregator() {

    // Get the SessionInfo for the two underlying sessions we want to use.
    // In this case, session IDs 1 and 2.
    sequence<com.apama.session.SessionInfo> sources :=
        new sequence<com.apama.session.SessionInfo>;
    sources.append( sessionManagerIface.getSessionInfo( 1 ) );
    sources.append( sessionManagerIface.getSessionInfo( 2 ) );

    // Now create the aggregator
    com.apama.md.agg.Aggregator aggregator := new com.apama.md.agg.Aggregator;
    aggregator.create( mainContext, "MyAggregator", sources,
        new sequence<com.apama.session.SessionConfigParams>,
        aggCreationSuccess, aggCreationFailure );
}
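The create() call above references two callback actions, aggCreationSuccess and aggCreationFailure, that are not shown in the excerpt. The sketch below illustrates one possible shape for them; the parameter lists are assumptions for illustration only, so check the com.apama.md.agg.Aggregator API reference for the exact callback signatures expected by create().

// Illustrative sketch only -- the parameter lists below are assumptions,
// not the definitive callback signatures expected by Aggregator.create().
action aggCreationSuccess( com.apama.session.SessionInfo aggSessionInfo ) {
    // The aggregated session is now available and can be connected to
    // through the market data manager, as shown in the next section.
    log "Aggregator session created: " + aggSessionInfo.toString() at INFO;
}

action aggCreationFailure( string reason ) {
    log "Failed to create aggregator: " + reason at ERROR;
}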
Connecting to an aggregator
An application connects to an aggregator in the same way it would connect to any other MDA datasource. The following code excerpt demonstrates how to connect to an aggregator for the EUR/USD symbol.

// This code excerpt assumes that the Aggregator and an MDManagerInterface
// have already been created

aggManager := mgr.createAggregatedBookManager();
com.apama.session.CtrlParams controlParams := new com.apama.session.CtrlParams;
aggManager.connect( "EUR/USD", controlParams,
    onSessionError, onConnectionSuccess, onConnectionFailure );
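The connect() call references three callback actions, onSessionError, onConnectionSuccess and onConnectionFailure, that are not shown in the excerpt. The sketch below illustrates one possible shape for them; the parameter lists are assumptions for illustration only, so refer to the AggregatedBookManagerInterface API reference for the exact signatures expected by connect().

// Illustrative sketch only -- the parameter lists below are assumptions,
// not the definitive callback signatures expected by connect().
action onConnectionSuccess( string symbol ) {
    log "Connected to aggregated book for " + symbol at INFO;
}

action onConnectionFailure( string symbol, string reason ) {
    log "Connection for " + symbol + " failed: " + reason at ERROR;
}

action onSessionError( string reason ) {
    log "Aggregator session error: " + reason at ERROR;
}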
After a successful connection, the AggregatedBookManagerInterface can be used in the normal way to examine the aggregated book data published by the aggregator.
The Advanced market data sample project demonstrates a simple use case where one aggregator combines data from up to three different underlying sessions.
Aggregator control parameters
The aggregator supports the control parameters described below.

ENABLE_TIMESTAMPS
Type: stringified boolean
Default value: disabled
If true, enables generation of two extra timestamps on each published data event: one taken when the underlying datastream event entered the aggregator, and another taken at the point of publication from the aggregator. These are useful for measuring the performance of the aggregator within an application.

MIN_SOURCES
Type: stringified integer
Default value: waits for all connections
Defines the minimum number of underlying datastreams that must be connected before the aggregator starts publishing an aggregated book. By default, the aggregator requires all underlying datasources to be connected before it starts publication. Setting MIN_SOURCES to 1 causes the aggregator to publish data as soon as it receives a single connection, rather than waiting for all connections to be made.

QUEUE_INITIAL_DATA
Type: stringified boolean
Default value: disabled
If true, queues all events that the aggregator has calculated but not yet published because the number of current connections has not reached the value defined by MIN_SOURCES. As soon as the required number of connections is reached, all queued publication events are published. This option is used in conjunction with MIN_SOURCES and is useful for showing how the aggregated book has been built up from the underlying datastream events, rather than generating a final snapshot from all the updates that the aggregator has received.

CLEAR_STALE_DATA
Type: stringified boolean
Default value: disabled
If true, causes the aggregator to permanently delete any current data from a source that has gone stale (has reached its timeout), rather than temporarily removing it from the aggregated book and caching it.

DATA_TIMEOUT
Type: stringified float
Default value: disabled
Specifies a data timeout period in seconds, after which the datasource connection is temporarily removed from the aggregated book. If no new updates have been received by the aggregator within the timeout period, the data is likely to be out of date. Once a new update is received, the datasource connection is re-added to the aggregated book.

UNDERLYING_DATA_TIMEOUT
Type: stringified dictionary<integer, integer>
Default value: disabled (empty dictionary)
Similar to DATA_TIMEOUT, but allows setting the timeout on each underlying source separately. The value is a stringified dictionary in which the key is the integer ID of the underlying source and the value is the integer timeout for that source. If an ID for an underlying source is not present in the dictionary, the DATA_TIMEOUT setting is used for that source.
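For example, the following sketch configures a connection so that the aggregator starts publishing as soon as a single underlying datastream is connected, queues the events calculated before that point, and temporarily removes a source from the aggregated book after five seconds without updates. It is illustrative only: it assumes that these parameters are supplied through the CtrlParams passed to connect() and that CtrlParams exposes an addParam(name, value) action; consult the com.apama.session.CtrlParams API reference for the exact way to add name/value control parameters.

// Illustrative sketch only -- the addParam(name, value) action and the use of
// CtrlParams to carry these settings are assumptions; check the API reference.
com.apama.session.CtrlParams controlParams := new com.apama.session.CtrlParams;
controlParams.addParam( "MIN_SOURCES", "1" );           // publish after a single connection
controlParams.addParam( "QUEUE_INITIAL_DATA", "true" ); // replay queued updates once publishing starts
controlParams.addParam( "DATA_TIMEOUT", "5.0" );        // temporarily remove a source after 5 seconds without updates

aggManager.connect( "EUR/USD", controlParams,
    onSessionError, onConnectionSuccess, onConnectionFailure );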
