Usage Notes for Horizontal Scalability
JMS/JNDI Configuration
Although you would use an HS-specific RNAME when creating a JMS connection factory, you should use a regular RNAME URL when creating the JNDI initial context to bind and look up JNDI entries. For example, if the landscape consists of stand-alone Universal Messaging servers UM1 and UM2, and a two-node cluster with Universal Messaging servers UM3 and UM4, you would have the following considerations:
When creating the JNDI assets, the operations must be executed for each realm and/or cluster that JMS is expected to use in the HS environment.
While creating the JNDI assets, set the provider URL for the initial context per realm and/or cluster. For example, for the first initial context use a provider URL of "UM1", for the second initial context use a provider URL of "UM2", and for the cluster use a provider URL of "UM3,UM4".
When instantiating the JNDI context for use, set the provider URL to "UM1,UM2,UM3,UM4". Thus, if a realm is unavailable, the JNDI context can move to the next available URL.
When creating and binding JMS connection factories, instantiate them with an HS RNAME, for example "(UM1)(UM2)(UM3,UM4)". The JMS connections created from such a connection factory will use the HS feature, as illustrated in the sketch below.
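As a minimal client-side sketch, assuming the Universal Messaging JNDI context factory class com.pcbsys.nirvana.nSpace.NirvanaContextFactory (verify the class name for your release) and a connection factory bound under the hypothetical JNDI name "HSConnectionFactory", the lookup below uses the regular comma-separated provider URL, while the HS behavior comes from the HS RNAME the factory was bound with:

import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;

public class HsJndiLookupExample {
    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        // Universal Messaging JNDI context factory (verify the class name for your release)
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.pcbsys.nirvana.nSpace.NirvanaContextFactory");
        // Regular RNAME list so the JNDI context can move to the next realm if one is unavailable
        env.put(Context.PROVIDER_URL, "UM1,UM2,UM3,UM4");
        InitialContext ctx = new InitialContext(env);

        // "HSConnectionFactory" is a hypothetical JNDI name; the factory itself was bound
        // with the HS RNAME "(UM1)(UM2)(UM3,UM4)", so connections created from it use HS
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("HSConnectionFactory");
        Connection connection = factory.createConnection();
        connection.start();
    }
}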
Store Configuration Consistency
In order for the HS feature to function correctly, you should ensure the consistency of Universal Messaging channels and queues across the HS landscape. In other words, if you want to use a channel named "myChannel" in an HS manner, it should be present with the same channel settings across all Universal Messaging servers in the HS landscape. This can be achieved either by manually creating the channels/queues with identical configuration on all nodes, or by using an HS native nSession or JMS connection to create the channels/queues. When creating stores using such a session/connection, channels/queues will not be created on nodes that are currently offline.
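As an illustration, a native nSession created with an HS RNAME can be used to create the store with identical settings on every reachable node; the RNAME, channel name, and channel type below are placeholders, not prescribed values:

import com.pcbsys.nirvana.client.*;

public class HsChannelCreateExample {
    public static void main(String[] args) throws Exception {
        // HS RNAME covering two stand-alone realms and a two-node cluster (placeholder names)
        nSessionAttributes attr = new nSessionAttributes("(UM1)(UM2)(UM3,UM4)");
        nSession hsSession = nSessionFactory.create(attr);
        hsSession.init();

        // The same channel attributes are applied on every node that is currently online
        nChannelAttributes cattr = new nChannelAttributes("myChannel");
        cattr.setType(nChannelAttributes.PERSISTENT_TYPE);
        hsSession.createChannel(cattr);

        hsSession.close();
    }
}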
Unavailable Servers
If a Universal Messaging server in the HS-configured nSession or JMS Connection is unavailable at the start of the session, or if a server becomes unavailable during the session for whatever reason, the session will automatically try to re-establish the physical connection to the server. If the unavailable server becomes available again, the HS session will try to resume publish/consume operations on that node as long as it contains the required stores; if the stores are not available, an entry is written to the log.
If the session is able to connect to at least one node in the HS landscape, then the session will be in a connected and usable state.
If all the configured servers become unavailable, then the nSession or JMS Connection will be closed. To recover from this situation, you need to restart at least one of the servers, then restart the client sessions.
If a channel/queue creation or consumer operation is executed while any of the configured physical connections are unavailable, the operation will not affect the offline nodes. The channel/queue would therefore need to be manually created and validated across the HS environment, and the nSession or JMS Connection destroyed and restarted.
Using Pause Publishing
When one of the servers in the HS session is configured for pause publishing, clients that attempt transactional publishing to that server will receive an nPublishPausedException. The rest of the servers will process the transactions, and those events will be published successfully.
When clients use non-transactional publishing, events sent to the server with pause publishing enabled are lost. To ensure that the client is notified when events are not published successfully, create an asynchronous exception listener for the HS session.
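For JMS clients, one way to do this is to register a standard javax.jms ExceptionListener on the HS connection; the sketch below assumes the connection has already been created from an HS connection factory:

import javax.jms.Connection;
import javax.jms.ExceptionListener;
import javax.jms.JMSException;

public class HsExceptionListenerExample {
    // Registers an asynchronous exception listener on an existing HS JMS connection
    public static void register(Connection connection) throws JMSException {
        connection.setExceptionListener(new ExceptionListener() {
            public void onException(JMSException e) {
                // Invoked asynchronously, for example when events could not be published
                System.err.println("Asynchronous exception on HS connection: " + e.getMessage());
            }
        });
    }
}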
Transactional Publishing When a Server is Unavailable
When a client is performing multi-threaded transactional publishing of messages using an HS session, and one of the Universal Messaging servers becomes unavailable during the transaction, some messages might remain unpublished. In addition, the client receives an nSessionNotConnectedException. The exception notifies you about the delivery state of the messages and suggests that you retry the transaction.
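The following native-API sketch shows one possible retry approach; the names and the retry policy are illustrative only, and duplicate handling after an uncertain delivery state remains application-specific:

import java.util.Vector;
import com.pcbsys.nirvana.client.*;

public class HsTransactionRetryExample {
    // Publishes a batch of events transactionally, retrying if the HS session reports
    // that it was not connected while the transaction was being processed
    public static void publishWithRetry(nChannel channel, Vector<nConsumeEvent> events,
                                        int maxAttempts) throws Exception {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                nTransaction txn = nTransactionFactory.create(new nTransactionAttributes(channel));
                txn.publish(events);   // all events in one transaction go to the same node
                txn.commit();
                return;                // committed successfully
            } catch (nSessionNotConnectedException e) {
                // A server became unavailable mid-transaction; some events may be unpublished.
                // Duplicate handling on retry is application-specific.
                if (attempt == maxAttempts) {
                    throw e;
                }
            }
        }
    }
}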
Message Ordering and Duplicate Detection
The HS feature does not provide any guarantees for the order of events being produced/published and consumed. The nSession or JMS Connection will pass incoming events to the consumers as soon as they are available, and in a multi-server landscape this order is not guaranteed. The order of events through a single server/cluster is still maintained.
Publishing is done in a round-robin fashion, with each individual event being sent to the next available node. The same applies to transactions of events: all of the events in a single transaction are sent to the same node, the events in the next transaction go to the next node in the HS landscape, and so on.
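For example, in the native API each call to publish below hands a single event to the HS session, which distributes the events across the available nodes; the RNAME and channel name are placeholders:

import com.pcbsys.nirvana.client.*;

public class HsPublishExample {
    public static void main(String[] args) throws Exception {
        nSession hsSession = nSessionFactory.create(new nSessionAttributes("(UM1)(UM2)(UM3,UM4)"));
        hsSession.init();
        nChannel channel = hsSession.findChannel(new nChannelAttributes("myChannel"));

        // Each individual event is handed to the HS session, which sends it to the
        // next available node; a transaction, in contrast, goes to a single node
        for (int i = 0; i < 4; i++) {
            channel.publish(new nConsumeEvent("tag", ("event " + i).getBytes()));
        }
        hsSession.close();
    }
}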
There is a greater chance of duplicate messages if the client does not acknowledge messages individually in an HS environment. This is a side effect of transactional consuming, whereby an acknowledgement may appear successful in the consumer API while a physical connection is unavailable, and the events that were outstanding will be redelivered.
HS has no ability to detect duplicate events, so if you produce two streams of events to channels/queues with the same name on two independent servers or clusters, a client consuming via HS will receive duplicate events.
Durable Subscribers
As stated above, message ordering is not guaranteed when producing messages, due to the round-robin behavior of an HS producer. This holds especially true when both the producing and consuming of events are done through an HS-enabled session. While the individual durable instance for a specific server or cluster will continue to uphold all of its guarantees correctly, it can only maintain the order of events from the point at which they are received into that server or cluster, and only for the events it has received. HS consumer events will be interleaved with events from other realms and/or clusters, which changes the original HS producer's order of events. This is especially relevant for Serial durable subscribers: an HS producer should not be used if the events are expected to arrive in the order in which the producer/publisher submitted them.
For example, in a simple two-server HS environment UM1, UM2, we produce events 1, 2, 3, 4. The server UM1 receives events 1 and 3 and the server UM2 receives events 2 and 4. Upon consuming, we can expect to receive them in any of these combinations: 1-3-2-4, 1-2-3-4, 1-2-4-3, 2-4-1-3, 2-1-4-3, 2-1-3-4.
For related information on durable subscriptions, see the section Durable Subscriptions.
Event Identifiers
In the native API, clients obtain event identifiers using the call nConsumeEvent.getEventID(). In the JMS API they are hidden, but are still used for event identification during processing. Within HS these identifiers are generated in the client API, with the original event identifier tracked in the HS layer to facilitate event processing such as acknowledgment and rollback. Because the usable ID is generated outside a server instance, API calls that attempt to subscribe from a specific event identifier within a channel/queue are ignored and consumption always starts at the beginning. This may cause some duplication of events when not using durables, or when using the exclusive durable type in the native API.
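For instance, a native HS consumer obtains the client-generated identifier in the usual way; the sketch below uses placeholder names, and these identifiers should not be relied on as starting points for new subscriptions:

import com.pcbsys.nirvana.client.*;

public class HsEventIdExample {
    public static void main(String[] args) throws Exception {
        nSession hsSession = nSessionFactory.create(new nSessionAttributes("(UM1)(UM2)(UM3,UM4)"));
        hsSession.init();
        nChannel channel = hsSession.findChannel(new nChannelAttributes("myChannel"));

        // The ID seen here is generated in the client API; the original server-side
        // identifier is tracked internally by the HS layer for acknowledgment and rollback
        channel.addSubscriber(new nEventListener() {
            public void go(nConsumeEvent event) {
                System.out.println("Received event, client-side ID = " + event.getEventID());
            }
        });

        Thread.sleep(60000); // keep the example subscriber alive briefly
        hsSession.close();
    }
}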
Server Ports/Interfaces
An HS session supports connecting to various connection interfaces, e.g. NHP/NHPS/NSP/NSPS, and you can use a combination of these in a single HS RNAME:
// HS RNAME mixing NSPS, NSP, and NHP interfaces across two stand-alone realms and a two-node cluster
nSessionAttributes sessionAttributes = new nSessionAttributes(
    "(nsps://host1:9000)(nsp://host2:9000)(nhp://host3:9000,nhp://host4:9000)");
// Create and initialize the HS session
nSession hsSession = nSessionFactory.create(sessionAttributes);
hsSession.init();
Logging
HS-related log entries will be produced in the client application's log at various log levels. The log entries for the HS layer are prefixed with the "HS>" string. Using the TRACE log level in an HS client will log each nConsumeEvent received from the servers and will substantially increase the amount of information logged.