Apama 10.15.0 | Connecting Apama Applications to External Components | Correlator-Integrated Support for the Java Message Service (JMS) | Using the Java Message Service (JMS) | Designing and implementing applications for correlator-integrated messaging for JMS | Performance considerations when using JMS
 
Performance considerations when using JMS
 
Performance logging when using JMS
Receiver performance when using JMS
Sender performance when using JMS
When designing an application that uses correlator-integrated messaging for JMS, consider the following performance-related topics.
*There are no guarantees about maximum latency. Persistent JMS messages inevitably incur significant latency compared to unreliable messaging, and Apama's support for JMS is focused on throughput rather than latency. Messages can be held up unexpectedly by many factors, such as the JMS provider, connection failures, a long wait for the receive-side transaction commit, a slow broker acknowledge() call, or a long wait for the correlator to complete an in-memory copy of its state.
*Multiple receivers on the same queue may improve performance. But consider that "For PTP, JMS does not specify the semantics of concurrent QueueReceivers for the same Queue; however JMS does not prohibit a provider from supporting this. Therefore, message delivery to multiple QueueReceivers will depend on the JMS provider's implementation. Applications that depend on delivery to multiple QueueReceivers are not portable."
*If performance is an issue, be sure to check the correlator log for WARN and ERROR messages; these may indicate an application, configuration, or connection problem that is responsible for the poor performance.
*Ensure that the correlator is not running with DEBUG logging enabled and is not logging all messages; either will cause a significant performance hit. Apama recommends running the correlator at the INFO log level, which avoids excessive logging but still retains information that may be indispensable for diagnosing problems.
*In practice, most performance problems are caused by mapping, especially when XML is used. Whenever possible, Apama recommends avoiding XML in JMS messages, due to the considerable overhead added by such a complex message format. For example, use a MapMessage or a TextMessage containing an Apama event string instead.
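As a minimal sketch of the second alternative, the following shows how a sender written in Java might build an Apama event string for the body of a TextMessage. The Tick event type and its fields are hypothetical, and the formatting shown (type name, parenthesized fields, quoted strings) is an assumption about the event string layout:

```java
// Sketch: building an Apama event string payload for a JMS TextMessage.
// The "Tick" event type and its fields are hypothetical examples.
public class ApamaEventString {
    // Escape backslashes and double quotes so the value is a valid string literal.
    static String quote(String s) {
        return "\"" + s.replace("\\", "\\\\").replace("\"", "\\\"") + "\"";
    }

    // Format a hypothetical Tick(string symbol, float price) event.
    static String tickEvent(String symbol, double price) {
        return "Tick(" + quote(symbol) + "," + price + ")";
    }

    public static void main(String[] args) {
        // The resulting string would be set as the TextMessage body.
        System.out.println(tickEvent("APAMA", 123.45));
    }
}
```

Because the payload is a flat string, the receiving correlator can parse it directly into an event, avoiding the XML mapping overhead described above.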
*If you are receiving several different event types, ensure that the conditional expressions used to select which mapping to execute are as simple as possible. In particular, there is a significant performance improvement when JMS message properties, rather than XML content inside the message body, are used to distinguish between message types; JMS message properties were designed in part for this purpose.
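To illustrate why, the sketch below contrasts the work involved in the two approaches using plain Java. The MESSAGE_TYPE property name and OrderEvent payload are hypothetical, and a real receiver would obtain the property via Message.getStringProperty() rather than from a map as simulated here:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.Map;

public class MessageTypeSelection {
    // Cheap: read a JMS-style string property set by the sender
    // (simulated here as a map lookup; no body parsing required).
    static String typeFromProperty(Map<String, String> props) {
        return props.get("MESSAGE_TYPE");
    }

    // Expensive: parse the entire XML body just to discover the type.
    static String typeFromXmlBody(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        return doc.getDocumentElement().getTagName();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<OrderEvent><qty>10</qty></OrderEvent>";
        // Both approaches identify the same type, but the property lookup
        // avoids constructing a DOM for every received message.
        System.out.println(typeFromProperty(Map.of("MESSAGE_TYPE", "OrderEvent")));
        System.out.println(typeFromXmlBody(xml));
    }
}
```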
*Use the Correlator Status lines in the log file to check whether the bottleneck is in the JMS runtime or in the EPL application itself. A full input queue ("iq=") is a strong indicator that the application may not be consuming messages fast enough from JMS.
*Consider enabling the logPerformanceBreakdown setting in JmsSenderSettings and JmsReceiverSettings to provide detailed low-level information about which aspects of sending and receiving are the most costly. This may indicate whether the main bottleneck, and hence the main optimization target, is in the message mapping or in the actual sending or receiving of messages. If mapping is not the main problem, it may be possible to achieve an improvement by customizing some of the advanced sender and receiver properties such as maxBatchSize and maxBatchIntervalMillis.
*Consider using maxExtraMappingThreads to perform the mapping of received JMS messages on one or more separate threads. This is especially useful when dealing with large or complex XML messages.
*Take careful measurements. The key to successful performance optimization is taking and accurately recording good measurements, along with the precise configuration changes made between each measurement. It is also a good idea to take multiple measurements over a period of at least several minutes, and to take account of the amount of variation or error in the measurements (by recording the minimum, mean, and maximum, or by calculating the standard deviation). In this way it is possible to identify configuration changes that have a real and significant impact on performance, and to distinguish them from random variation in the results. Note that many JMS providers behave badly and exhibit poor performance when overloaded (for example, when messages are sent so fast that queues inside the broker fill up and things begin to block). For this reason, the best way to test maximum steady-state performance is usually to give the sending process a way to be notified by the receiving process of how far behind it is. For example, if the sender and receiver are both correlators, engine_connect can be used to create a fast channel from the receiver back to the sender, and the test system can send Apama events to the sender channel every 0.5 seconds reporting how many events have been received so far. This allows better performance testing, with a bound on the maximum number of outstanding messages (sent but not yet received) to prevent the broker from being overwhelmed.
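The summary statistics mentioned above can be computed in a few lines of Java; the sample throughput figures below are invented for illustration:

```java
import java.util.Arrays;

public class ThroughputStats {
    // Summarize repeated throughput measurements (e.g. events/sec) so that a
    // real configuration improvement can be distinguished from run-to-run noise.
    // Returns { min, mean, max, standard deviation }.
    static double[] summarize(double[] samples) {
        double min = Arrays.stream(samples).min().orElse(Double.NaN);
        double max = Arrays.stream(samples).max().orElse(Double.NaN);
        double mean = Arrays.stream(samples).average().orElse(Double.NaN);
        double variance = Arrays.stream(samples)
                .map(s -> (s - mean) * (s - mean)).sum() / samples.length;
        return new double[] { min, mean, max, Math.sqrt(variance) };
    }

    public static void main(String[] args) {
        // Hypothetical measurements from five test runs of several minutes each.
        double[] runs = { 98000, 102000, 99500, 101000, 99500 };
        double[] s = summarize(runs);
        System.out.printf("min=%.0f mean=%.0f max=%.0f stddev=%.1f%n",
                s[0], s[1], s[2], s[3]);
    }
}
```

A change that moves the mean by less than one standard deviation is unlikely to be a genuine improvement.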
*Be careful when measuring performance on a virtual machine rather than dedicated hardware. VMs often have quite different performance characteristics from physical hardware. Take particular care when using VMs on a shared host, whose performance may be affected by spikes in the disk, memory, CPU, or network usage of unrelated VMs belonging to other users on the same host.