BigMemory 4.4.0 | FAQ | Configuration Questions
 
Configuration Questions
Where is the source code?
BigMemory Go is not an open-source product. For an open-source caching project, see Ehcache at http://ehcache.org.
Can you use more than one instance of BigMemory Go in a single JVM?
Yes. Create a CacheManager using new CacheManager(...) and keep hold of the reference. The singleton approach, accessible with the getInstance(...) method, is still available too. However, hundreds of caches can be supported with one CacheManager, so use separate CacheManagers where different configurations are needed. The Hibernate Provider has also been updated to support this behavior.
What elements are mandatory in ehcache.xml?
See the file ehcache.xsd in the BigMemory Go kit for the latest information on required configuration elements.
How is auto-versioning of elements handled?
Automatic element versioning works only with memory-store caches. BigMemory Go does not use auto-versioning.
To enable auto-versioning, set the system property net.sf.ehcache.element.version.auto to true (it is false by default). Manual (user provided) versioning of cache elements is ignored when auto-versioning is in effect. Note that if this property is turned on for one of the ineligible caches, auto-versioning will silently fail.
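As a minimal sketch, the property can also be set programmatically before any caches are created; this is equivalent to passing the -D flag on the JVM command line:

```java
// Minimal sketch: enabling auto-versioning via the system property named
// above. This must happen before the cache is created, and remember that
// auto-versioning applies only to memory-store caches.
public class EnableAutoVersioning {
    public static void main(String[] args) {
        // Equivalent to -Dnet.sf.ehcache.element.version.auto=true on the command line
        System.setProperty("net.sf.ehcache.element.version.auto", "true");
        System.out.println(System.getProperty("net.sf.ehcache.element.version.auto")); // true
    }
}
```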
How do I get a memory-only store to persist to disk between JVM restarts?
BigMemory Go offers fast, robust disk persistence set through configuration. For details, see the FRS description at About Fast Restart (FRS) in the BigMemory Go Configuration Guide.
What is the recommended way to write to a database?
There are two patterns available: write-through and write-behind caching. In write-through caching, writes to the cache cause writes to an underlying resource. The cache acts as a facade to the underlying resource. With this pattern, it often makes sense to read through the cache too. Write-behind caching uses the same client API; however, the write happens asynchronously. For details, see About Write-Through and Write-Behind Caches in the BigMemory Go Developer Guide.
While a file system or a web-service client can underlie the facade of a write-through cache, the most common underlying resource is a database.
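As a hedged sketch, a write-behind cache backed by a database might be configured with a cacheWriter element similar to the following. The factory class name is hypothetical, and the attribute names and values should be verified against the ehcache.xsd in your kit:

```xml
<cache name="writeBehindCache" maxEntriesLocalHeap="10000">
  <!-- write-behind: writes are queued and applied to the database asynchronously -->
  <cacheWriter writeMode="write-behind"
               maxWriteDelay="5"
               writeBatching="true"
               writeBatchSize="100">
    <!-- hypothetical factory class that creates the CacheWriter backed by your database -->
    <cacheWriterFactory class="com.example.MyCacheWriterFactory"/>
  </cacheWriter>
</cache>
```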
Can I use BigMemory Go as a memory store only?
Yes. Just set the persistence strategy (in the <cache> configuration element) to "none":
<cache>
...
<persistence strategy="none"/>
...
</cache>
Can I use BigMemory Go as a disk store only?
No. However, you can minimize the usage of memory using sizing configuration. For details, see Sizing Storage Tiers in the BigMemory Go Configuration Guide.
Is it thread-safe to modify element values after retrieval from a store?
Remember that a value in an element is globally accessible from multiple threads. It is inherently not thread-safe to modify the value. It is safer to retrieve a value, delete the element and then reinsert the value.
The UpdatingCacheEntryFactory does work by modifying the contents of values in place in the cache. This is outside of the core of BigMemory Go and is targeted at high performance CacheEntryFactories for SelfPopulatingCaches. For details, see the Ehcache Javadoc at http://ehcache.org/apidocs/2.10.1/net/sf/ehcache/Cache.html#getQuiet%28java.io.Serializable%29.
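The safer retrieve, delete, and reinsert pattern described above can be illustrated with a plain ConcurrentHashMap standing in for the cache (an analogy for this sketch, not the BigMemory Go API; the analogous Cache calls are get, remove, and put):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

// Illustration of the retrieve-copy-reinsert pattern, using a plain map
// in place of a cache. Mutating a shared value in place would race with
// other readers; copying and replacing the mapping avoids that.
public class CopyOnWriteUpdate {
    public static void main(String[] args) {
        ConcurrentHashMap<String, List<String>> store = new ConcurrentHashMap<>();
        store.put("key", List.of("a", "b"));

        // Safer: copy the retrieved value, modify the copy, then replace the mapping.
        List<String> copy = new ArrayList<>(store.get("key"));
        copy.add("c");
        store.replace("key", copy);

        System.out.println(store.get("key")); // [a, b, c]
    }
}
```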
Can non-serializable objects be stored?
Non-serializable objects can be stored only in the BigMemory Go memory store (heap). If an attempt is made to overflow a non-serializable element to the BigMemory Go off-heap or disk stores, the element is removed and a warning is logged.
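A quick way to check whether a value is eligible for the off-heap or disk tiers is whether it implements java.io.Serializable; a small stand-alone illustration:

```java
import java.io.Serializable;

// Values that may overflow to off-heap or disk must implement
// java.io.Serializable; heap-only values need not.
public class SerializableCheck {
    public static void main(String[] args) {
        Object serializableValue = "a string";      // String implements Serializable
        Object nonSerializableValue = new Object(); // plain Object does not

        System.out.println(serializableValue instanceof Serializable);    // true
        System.out.println(nonSerializableValue instanceof Serializable); // false
    }
}
```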
What is the difference between TTL, TTI, and eternal?
These three configuration attributes can be used to design effective data lifetimes. Their assigned values should be tested and tuned to help optimize performance. timeToIdleSeconds (TTI) is the maximum number of seconds that an element can exist in the store without being accessed, while timeToLiveSeconds (TTL) is the maximum number of seconds that an element can exist in the store whether or not it has been accessed. If the eternal flag is set, elements are allowed to exist in the store eternally and none are evicted. The eternal setting overrides any TTI or TTL settings. For details, see Managing Data Life in the BigMemory Go Configuration Guide.
These attributes are set in the configuration file per cache. To set them per element, you must do so programmatically. For information, see the Javadoc for the Element class at http://ehcache.org/apidocs/2.10.1/net/sf/ehcache/Element.
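As an illustration only (this is not BigMemory Go's implementation; the method and variable names are assumptions for this sketch), the interaction of TTL, TTI, and eternal can be expressed as a simple expiry check:

```java
// Illustration of how TTL, TTI, and eternal combine to decide expiry.
// eternal overrides both timers; otherwise an element expires when either
// its total age exceeds TTL or its idle time exceeds TTI.
public class ExpiryDemo {
    static boolean isExpired(long nowMs, long createdMs, long lastAccessedMs,
                             long ttlSeconds, long ttiSeconds, boolean eternal) {
        if (eternal) {
            return false; // eternal elements are never evicted by the timers
        }
        boolean ttlExpired = ttlSeconds > 0 && nowMs - createdMs > ttlSeconds * 1000;
        boolean ttiExpired = ttiSeconds > 0 && nowMs - lastAccessedMs > ttiSeconds * 1000;
        return ttlExpired || ttiExpired;
    }

    public static void main(String[] args) {
        long now = 1_000_000L;
        // Created 700s ago, accessed 100s ago, TTL=600, TTI=300: TTL exceeded, so expired.
        System.out.println(isExpired(now, now - 700_000, now - 100_000, 600, 300, false)); // true
        // Same timings but eternal=true: never expires.
        System.out.println(isExpired(now, now - 700_000, now - 100_000, 600, 300, true));  // false
    }
}
```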
If null values are stored in the cache, how can my code tell the difference between "intentional" nulls and non-existent entries?
Suppose your application queries the database excessively, only to find that there is no result. Since there is no result, there is nothing to cache. To prevent the query from being executed unnecessarily, cache a null value to signal that a particular key doesn't exist.
In code, checking for intentional nulls versus non-existent cache entries may look like:
// cache an explicit null value:
cache.put(new Element("key", null));

Element element = cache.get("key");
if (element == null) {
    // nothing in the cache for "key" (or expired) ...
} else {
    // there is a valid element in the cache, however getObjectValue() may be null:
    Object value = element.getObjectValue();
    if (value == null) {
        // a null value is in the cache ...
    } else {
        // a non-null value is in the cache ...
    }
}
The cache configuration in ehcache.xml may look similar to the following:
<cache name="some.cache.name"
       maxEntriesLocalHeap="10000"
       eternal="false"
       timeToIdleSeconds="300"
       timeToLiveSeconds="600"
/>
Use a finite timeToLiveSeconds setting to force an occasional update.
How many threads does BigMemory Go use, and how much memory does that consume?
The amount of memory consumed per thread is determined by the thread stack size, which is set with the JVM option -Xss.
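A back-of-the-envelope estimate, assuming each thread reserves the full stack set by -Xss (the thread count and stack size below are hypothetical):

```java
// Rough estimate: total stack memory ≈ number of threads × -Xss value.
// Both figures below are hypothetical examples, not BigMemory Go defaults.
public class StackMemoryEstimate {
    public static void main(String[] args) {
        long stackSizeBytes = 512L * 1024; // e.g. -Xss512k
        int threads = 100;                 // hypothetical thread count
        long totalMb = threads * stackSizeBytes / (1024 * 1024);
        System.out.println(totalMb + " MB"); // 50 MB
    }
}
```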
What happens when maxEntriesLocalHeap is reached? Are the oldest items expired when new ones come in?
When the maximum number of elements in memory is reached, the Least Recently Used (LRU) element is removed. "Used" in this case means inserted with a put or accessed with a get. The LRU element is flushed asynchronously to the off-heap store.
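LRU semantics can be illustrated with a stand-alone LinkedHashMap in access order (an analogy for this sketch, not BigMemory Go's implementation); note that both put and get count as "use":

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustration of LRU eviction: a LinkedHashMap in access order evicts
// the least recently used entry once the capacity limit is exceeded.
public class LruDemo {
    public static void main(String[] args) {
        final int maxEntries = 3;
        Map<String, String> lru = new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > maxEntries;
            }
        };
        lru.put("a", "1");
        lru.put("b", "2");
        lru.put("c", "3");
        lru.get("a");      // "a" is now the most recently used
        lru.put("d", "4"); // exceeds capacity: evicts "b", the LRU entry
        System.out.println(lru.keySet()); // [c, a, d]
    }
}
```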
Why is there an expiry thread for the disk store but not for the other stores?
Because the in-memory data is limited to a fixed maximum number of elements or bytes, its maximum memory use equals the number of elements multiplied by the average element size. When an element is added beyond the maximum size, the LRU element gets flushed to the disk store. Running an expiry thread in memory turns out to be a very expensive and potentially contentious operation. It is far more efficient to check expiry only when needed rather than to search for expired elements explicitly. The tradeoff is higher average memory use.
The disk-store expiry thread keeps the disk clean. There is typically less contention for the disk store's locks because commonly used values are in memory. If you are concerned about CPU utilization and locking in the disk store, you can set diskExpiryThreadIntervalSeconds to a high value, such as one day, or effectively turn the thread off by setting it to a very large value.
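As a hedged example, the interval is set per cache via the diskExpiryThreadIntervalSeconds attribute; the cache name and other values below are hypothetical, and should be checked against the ehcache.xsd in your kit:

```xml
<!-- run the disk expiry thread once a day (86400 seconds) -->
<cache name="someDiskBackedCache"
       maxEntriesLocalHeap="10000"
       diskExpiryThreadIntervalSeconds="86400">
  <persistence strategy="localTempSwap"/>
</cache>
```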
What eviction strategies are supported?
LRU, LFU, and FIFO eviction strategies are supported.
How does element equality work in serialization mode?
An element (key and value) in BigMemory is guaranteed to .equals() its copy as it moves between stores.
Can you use BigMemory Go as a second-level cache in Hibernate and BigMemory Go outside of Hibernate at the same time?
Yes. You use one instance of BigMemory Go with one ehcache.xml. You configure your caches with Hibernate names for use by Hibernate. You can have other caches which you interact with directly, outside of Hibernate.
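A sketch of one ehcache.xml serving both roles; the entity class and cache names below are hypothetical examples:

```xml
<!-- Cache used by Hibernate as a second-level cache region; the name
     matches the entity's region name (a hypothetical entity class here). -->
<cache name="com.example.domain.Customer"
       maxEntriesLocalHeap="5000"
       timeToLiveSeconds="600"/>

<!-- Cache used directly by application code, outside of Hibernate -->
<cache name="app.lookupCache"
       maxEntriesLocalHeap="1000"
       eternal="true"/>
```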