Using the Prefetch API
Overview
To improve the user experience of Universal Messaging's synchronous messaging APIs, a new set of synchronous consumer prefetch APIs was added in v10.7. The old "non-prefetch" APIs had the following disadvantages:
The maximum number of events that could be read from the server was limited by the Window Size setting. The window size defines the maximum number of events that have been sent to the consumer but have not yet been acknowledged by it. The window size was not transparent to the consumer and could not be changed dynamically. As a result, if event acknowledgment was slower than synchronous consumption, synchronous consumers would throw a "Need to Commit or rollback" exception when the window size was reached.
Events were cached client-side. Often, events were delivered to consumers in batches, but only one event was returned by the synchronous APIs. This left some pending cached events client-side, and this situation was not transparent to the user.
The prefetch APIs were created to address the above issues:
The synchronous APIs can now receive as many events as they request (the prefetch) and are not limited by a window size. The client application is in control of how many pending/unacknowledged events it holds.
The APIs now return a list of events, which eliminates client-side caching and makes event delivery transparent to the consumer.
The old, now deprecated, non-prefetch APIs are limited in v10.7 to a prefetch of 1. This means they can no longer receive events in batches as they did before.
The window size is still used for asynchronous consumers, where the consumer does not control how many events it receives; there it remains a sensible way to throttle the number of events sent.
APIs added
The list of prefetch APIs added in v10.7 is as follows:
Java
nChannelIterator: public List<nConsumeEvent> getNextEvents(int prefetchSize)
public List<nConsumeEvent> getNextEvents(int prefetchSize, long timeout)
nQueueSyncReader: public List<nConsumeEvent> popEvents(int prefetchSize)
public List<nConsumeEvent> popEvents(int prefetchSize, long timeout)
public List<nConsumeEvent> popEvents(int prefetchSize, long timeout, String selector)
MessageConsumerImpl: public List<javax.jms.Message> receiveMessages(int prefetchSize)
public List<javax.jms.Message> receiveMessages(long timeOut, int prefetchSize)
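As a brief illustration, the following Java sketch drains events from an existing nChannelIterator using the new batch call. It is only a sketch: how the iterator is created (session, channel, durable, auto-acknowledge setting) is omitted, and the explicit event.ack() call assumes auto-acknowledge is disabled for this iterator.

import java.util.List;

import com.pcbsys.nirvana.client.nChannelIterator;
import com.pcbsys.nirvana.client.nConsumeEvent;

public class PrefetchIteratorSketch {

    // Drains events in batches of up to 'prefetchSize' per call. The whole batch is
    // handed to the application at once; nothing is held back in a client-side cache.
    public static void drain(nChannelIterator iterator, int prefetchSize, long timeout)
            throws Exception {
        while (true) {
            List<nConsumeEvent> batch = iterator.getNextEvents(prefetchSize, timeout);
            if (batch == null || batch.isEmpty()) {
                break; // nothing arrived within the timeout
            }
            for (nConsumeEvent event : batch) {
                System.out.println("Event " + event.getEventID() + ", "
                        + event.getEventData().length + " bytes");
                // The application decides how many events remain unacknowledged;
                // acknowledge explicitly if auto-acknowledge is disabled.
                event.ack();
            }
        }
    }
}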
C#
nChannelIterator: public List<nConsumeEvent> getNextEvents(int prefetchSize)
public List<nConsumeEvent> getNextEvents(int prefetchSize, long timeout)
nQueueSyncReader: public List<nConsumeEvent> popEvents(int prefetchSize)
public List<nConsumeEvent> popEvents(int prefetchSize, long timeout)
public List<nConsumeEvent> popEvents(int prefetchSize, long timeout, String selector)
C++
nChannelIterator: std::list<nConsumeEvent*>* getNextEvents(int prefetchSize)
std::list<nConsumeEvent*>* getNextEvents(int prefetchSize, long timeout)
nQueueSyncReader: std::list<nConsumeEvent*>* popEvents(int prefetchSize)
std::list<nConsumeEvent*>* popEvents(int prefetchSize, longlong timeout)
std::list<nConsumeEvent*>* popEvents(int prefetchSize, longlong timeout, std::string selector)
Deprecated APIs
The APIs deprecated in v10.7 are:
nChannel: public nChannelIterator createIterator(nDurable name, String selector, int windowSize)
public nChannelIterator createIterator(nDurable name, String selector, int windowSize, boolean autoAck)
nChannelIterator: public nConsumeEvent getNext()
public nConsumeEvent getNext(long timeout)
nQueueSyncReader: public final nConsumeEvent pop()
public final nConsumeEvent pop(final long timeout)
public nConsumeEvent pop(final long timeout, final String selector)
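To illustrate the difference on the queue side, the following Java sketch contrasts the deprecated single-event pop with the prefetch-based popEvents. Again this is only a sketch: creating the nQueueSyncReader and any commit/acknowledgment handling are omitted, as they depend on the reader type used.

import java.util.List;

import com.pcbsys.nirvana.client.nConsumeEvent;
import com.pcbsys.nirvana.client.nQueueSyncReader;

public class QueuePopMigrationSketch {

    // Deprecated style: one event per call, subject to the window size.
    public static void popSingle(nQueueSyncReader reader, long timeout) throws Exception {
        nConsumeEvent event = reader.pop(timeout);
        if (event != null) {
            System.out.println("Popped event " + event.getEventID());
        }
    }

    // Prefetch style: request a batch explicitly; no window size applies and the
    // full batch is returned to the application.
    public static void popBatch(nQueueSyncReader reader, int prefetchSize, long timeout)
            throws Exception {
        List<nConsumeEvent> batch = reader.popEvents(prefetchSize, timeout);
        if (batch != null) {
            for (nConsumeEvent event : batch) {
                System.out.println("Popped event " + event.getEventID());
            }
        }
    }
}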
Horizontal Scalability (HS) notes
In the HS use case it is possible for the returned set of events to be larger than the requested prefetch size. In that case its size corresponds to a previously requested prefetch value.
The HS mechanism sends each request to every server in the HS landscape and delivers the fastest response to the client. If a previous request was made with a larger prefetch value, a subsequent receive/pop call may therefore receive a set of events that was requested with that larger prefetch value; the whole set is returned at once to avoid client-side caching.
A prefetch call in HS can return events from only one server. Currently the returned event set will not mix events from different servers.
For example, consider a 3-server HS landscape and an HS channel with 100 events on each server. An iterator calls getNextEvents with a prefetch size of 5. Three responses with 5 events each are received; the first response is delivered to the client and the other two are kept waiting in the HS layer. If a subsequent getNextEvents call is then made with a prefetch size of 2, one of the cached responses, containing 5 events, is delivered.
Performance Notes
Using a bigger prefetch size can improve performance
Using a bigger prefetch size can improve performance, as events will be delivered in batches (if there are events piled up on the server); this is especially noticeable for small events. There is no upper limit on the prefetch size, but there is a limit of 1 MB for each batch, so performance-wise there is no benefit to using batches bigger than 1 MB.
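As a rough, back-of-the-envelope illustration of that 1 MB ceiling (a sizing heuristic only, not anything enforced or exposed by the API), the prefetch size could be capped by the average event size:

public final class PrefetchSizing {

    private static final int MAX_BATCH_BYTES = 1024 * 1024; // ~1 MB per-batch limit

    // Suggest a prefetch size: as large as desired, but with no benefit beyond
    // what fits into a single ~1 MB batch for the given average event size.
    public static int suggestPrefetch(int desiredBatch, int avgEventSizeBytes) {
        int maxByPayload = MAX_BATCH_BYTES / Math.max(1, avgEventSizeBytes);
        return Math.max(1, Math.min(desiredBatch, maxByPayload));
    }
}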
JMS API
The prefetch functionality is not part of the JMS specification, so JMS clients cannot take advantage of it directly. By default, the JMS API uses a prefetch size of 1 to avoid client-side caching. Nevertheless, to allow users to consume events in batches and improve performance, the system configuration property nirvana.syncPrefetchSize can be set client-side. This integer property defines how many events the JMS synchronous consumer should request from the server (the prefetch logic is the same). The consumer still gets a single message from each receive call, while the rest of the prefetched batch is cached client-side, so the client controls the cache and gets the same (or better) performance.
As of 10.7 Fix 1, the prefetch size for JMS can also be set per connection factory using the connection factory property JMS_my-channels_SyncPrefetchSize.
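A minimal Java sketch of using the system property follows. The property value, acknowledgment mode and destination handling are illustrative assumptions; the connection factory is assumed to be obtained from JNDI or created programmatically.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class JmsPrefetchSketch {

    public static void consume(ConnectionFactory factory, Queue queue) throws Exception {
        // Ask the UM JMS client to prefetch up to 50 messages per server round trip.
        // Set before the connection and consumer are created; 50 is an arbitrary example.
        System.setProperty("nirvana.syncPrefetchSize", "50");

        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(queue);

            // receive() still returns one message at a time; the rest of the prefetched
            // batch is cached client-side by the JMS layer.
            Message message;
            while ((message = consumer.receive(1000)) != null) {
                System.out.println("Received " + message.getJMSMessageID());
                message.acknowledge();
            }
        } finally {
            connection.close();
        }
    }
}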
Using the prefetch APIs moves the responsibility for consumer event throttling to the client application; the server will no longer honor any window size. This needs to be taken into account when upgrading synchronous consumer applications to 10.7+.
Older clients working with the 10.7+ server
While new synchronous clients no longer use a window size, clients built before the introduction of the prefetch feature still do. The 10.7+ server honors the window size (and the 10.5 queue batching) for these older clients, preserving the behavior of old client applications until they can be migrated.