Considerations for Configuring a Distributed Cache
Keep the following points in mind when configuring a distributed cache.
A distributed cache accepts only serializable keys and values. If a service attempts to put a non-serializable object into the cache at run time, the service will receive an exception.
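For example, a value class must implement java.io.Serializable before its instances can be stored in a distributed cache. The following sketch is illustrative only; the OrderDetail class and its fields are not part of the product.

```java
import java.io.Serializable;

// Illustrative value type: both the key (for example, a String order ID) and
// the value implement java.io.Serializable, so the entry can be distributed to
// the Terracotta Server Array. A value holding a non-serializable resource,
// such as an open database connection, would cause the put to fail with an
// exception at run time.
public class OrderDetail implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String orderId;
    private final double total;

    public OrderDetail(String orderId, double total) {
        this.orderId = orderId;
        this.total = total;
    }

    public String getOrderId() { return orderId; }
    public double getTotal()   { return total; }
}
```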
A distributed cache uses the disk store on the Terracotta Server Array, not on the Integration Server. You cannot configure the portion of a distributed cache that resides on the Integration Server to overflow or persist to disk. When you create a distributed cache using Integration Server Administrator, the Overflow to Disk and Disk Persistent settings are disabled.
An Integration Server references a cache on the Terracotta Server Array using a fully qualified cache name. The fully qualified name of a cache consists of the name of the cache manager and the name of the cache. For example, the fully qualified name of a cache called “OrderDetails” in a cache manager called “Orders” is “Orders.OrderDetails”. If you have multiple Integration Servers that share a distributed cache, be sure that they all use the same fully qualified name for the cache.
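The following sketch shows how the two parts of the fully qualified name fit together at the Ehcache level: the cache manager name selects the cache manager, and the cache name selects the cache within it. It assumes a cache manager named “Orders” already exists in the JVM and uses an illustrative key and value; on Integration Server, services typically reach the cache through the built-in caching services rather than this API.

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class FullyQualifiedNameSketch {
    public static void main(String[] args) {
        // The fully qualified name "Orders.OrderDetails" breaks down into the
        // cache manager name ("Orders") and the cache name ("OrderDetails").
        CacheManager orders = CacheManager.getCacheManager("Orders");
        Cache orderDetails = orders.getCache("OrderDetails");

        // Every Integration Server that shares the distributed cache must use
        // exactly the same pair of names to reach the same cache on the
        // Terracotta Server Array.
        orderDetails.put(new Element("PO-1001", "pending"));
        Element element = orderDetails.get("PO-1001");
        System.out.println(element == null ? null : element.getObjectValue());
    }
}
```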
To create a distributed cache that will be shared by multiple Integration Servers, first create and enable the cache on one of the Integration Servers. This step registers the distributed cache on that Integration Server and also creates the cache on the Terracotta Server Array. Then add the distributed cache to each additional Integration Server that will use the cache. When you enable the distributed cache on these Integration Servers, they detect that the cache already exists on the Terracotta Server Array and begin using it.
For a Terracotta Server Array, you can set failover behavior to consistency instead of availability (the default). For information, see the section on failover tuning for guaranteed consistency in the BigMemory Max Administrator Guide.