Apama 10.15.0 | Other Resolved Issues | Release 10.1.0 | EPL Plug-ins
 
EPL Plug-ins
*PAM-28374
Profiling shows zero in getTotal output for plug-ins.
The profiling option getTotal no longer incorrectly returns zero for all columns other than cumulative time and CPU time.
*PAM-28159
HTTP server and client fail to parse unknown content-type parameter.
A bug has been fixed where the HTTP server and client transports would reject messages whose Content-Type included a non-standard parameter but no charset parameter. Such messages are now accepted and the non-standard parameters are ignored.
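The lenient behavior can be illustrated with a minimal parser sketch (plain Java, not the actual transport code, and the header values are made up): unknown Content-Type parameters are skipped rather than causing a rejection, while a charset parameter, if present, is still honored.

```java
import java.util.Locale;

public class ContentTypeCharset {
    /** Returns the charset parameter of a Content-Type header, or the
     *  given fallback if absent; unknown parameters are simply ignored. */
    public static String charsetOf(String contentType, String fallback) {
        String[] parts = contentType.split(";");
        for (int i = 1; i < parts.length; i++) {
            String param = parts[i].trim();
            int eq = param.indexOf('=');
            if (eq < 0) continue; // malformed parameter: skip, do not reject
            String name = param.substring(0, eq).trim().toLowerCase(Locale.ROOT);
            if (name.equals("charset")) {
                return param.substring(eq + 1).trim();
            }
            // any other parameter (standard or not) is ignored
        }
        return fallback;
    }

    public static void main(String[] args) {
        // non-standard parameter, no charset: accepted, fallback used
        System.out.println(charsetOf("text/plain; custom-hint=xyz", "utf-8"));     // prints "utf-8"
        // charset present alongside an unknown parameter: charset wins
        System.out.println(charsetOf("text/plain; foo=bar; charset=iso-8859-1", "utf-8")); // prints "iso-8859-1"
    }
}
```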
*PAM-27910
MQTT transport requires explicit configuration of SSL options for non-SSL use.
The default value for acceptUnrecognizedCertificates in the MQTT bundle has been corrected to be false rather than the empty string.
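For explicit control, the option can still be set in the MQTT transport's connectivity configuration. The YAML below is only an illustrative sketch: the chain layout follows the usual Apama connectivity-chain pattern, and the chain name and broker URL are placeholders, not values from this release note.

```yaml
startChains:
  mqttChain:
    - apama.eventMap
    - mqttTransport:
        brokerURL: "tcp://localhost:1883"   # placeholder broker address
        # Now defaults to false; set to true only for test brokers whose
        # certificates you deliberately choose not to verify.
        acceptUnrecognizedCertificates: false
```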
*PAM-27816
Occasional core dump in MQTTAsync_receiveThread / MQTTAsync_retry.
When the MQTT transport dropped a connection to a broker due to an authentication or SSL error, on very rare occasions the transport would lock up or cause a correlator crash. This has now been fixed.
*PAM-27360
HTTPClient inserts nulls into data stream around the 100k mark.
For messages longer than 100k, the HTTP client could insert null bytes into the data: the result would be the first 100k of data, then 100k of zero bytes, followed by the remaining data. Because the string appeared to end at the first null, the message seemed truncated.
This has been fixed so that all the message data is now available.
*PAM-27296
Wrong object type created when a Channel object is constructed from a context inside a Java plug-in.
In a Java plug-in, constructing a com.apama.epl.plugin.Channel object from a com.apama.epl.plugin.Context incorrectly produced an object of type com.apama.epl.plugin.Context.
This issue has been fixed such that the expected Channel object type is now returned.
*PAM-27150
Sending events to Kafka at a high rate can result in some being dropped.
This was caused by Kafka producer behavior: the producer could time out and drop events even though the broker was still making progress. As a workaround, request.timeout.ms now defaults to Integer.MAX_VALUE; users can still override this value.
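If a bounded timeout is preferred over the new default, the producer setting can be overridden. The fragment below is illustrative only: request.timeout.ms is a standard Kafka producer property, but the exact place to set it in an Apama Kafka bundle is not specified by this note, and the 120000 value is an example.

```properties
# Kafka producer configuration (illustrative fragment)
# Apama now defaults this to Integer.MAX_VALUE (2147483647); override if desired:
request.timeout.ms=120000
```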