Using a Combined Read-Through and Write-Behind Cache
For applications that are not tolerant of inconsistency, the simplest solution is for the application to always read through the same cache that it writes through. Provided all database writes go through the cache, consistency is guaranteed.
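For example, using the Ehcache API in BigMemory Go, the pattern can be sketched as follows. The cache name "customers", the CustomerRepository class, and the value types are illustrative assumptions only; the sketch also assumes the cache is configured with a write-behind CacheWriter and has a CacheLoader and CacheWriter registered at startup.

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

// Minimal sketch: all reads and writes use the same cache, so the cache
// (not the database) is always the authoritative view of the data.
// Assumes a cache named "customers" configured for write-behind, with a
// CacheLoader and CacheWriter already registered.
public class CustomerRepository {

    private final Cache cache;

    public CustomerRepository(CacheManager cacheManager) {
        this.cache = cacheManager.getCache("customers");
    }

    public Object find(String id) {
        // Passing a null loader uses the CacheLoader(s) registered with the
        // cache; a miss falls through to the database and populates the cache.
        Element element = cache.getWithLoader(id, null, null);
        return element == null ? null : element.getObjectValue();
    }

    public void save(String id, Object customer) {
        // putWithWriter() updates the cache immediately and queues the
        // database write on the write-behind queue.
        cache.putWithWriter(new Element(id, customer));
    }
}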
The following aspects of read-through with write-behind should be considered:
Lazy Loading
The entire dataset does not need to be loaded into the cache on startup. A read-through cache uses a CacheLoader that loads data into the cache on demand. In this way, the cache can be populated lazily.
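The sketch below illustrates such a loader; the CustomerCacheLoader name and the database lookup it delegates to are hypothetical, and only the load() method does real work, with the remaining interface methods kept as minimal stubs.

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import net.sf.ehcache.Ehcache;
import net.sf.ehcache.Status;
import net.sf.ehcache.loader.CacheLoader;

// Sketch of a loader that reads a record from the database only when a
// read misses the cache. The class name and the database call are
// hypothetical; plug in your own data access code in loadFromDatabase().
public class CustomerCacheLoader implements CacheLoader {

    public Object load(Object key) {
        // Called on a cache miss; the returned value is put into the cache.
        return loadFromDatabase(key);
    }

    public Map loadAll(Collection keys) {
        Map result = new HashMap();
        for (Object key : keys) {
            result.put(key, load(key));
        }
        return result;
    }

    public Object load(Object key, Object argument) {
        return load(key);               // the argument is ignored in this sketch
    }

    public Map loadAll(Collection keys, Object argument) {
        return loadAll(keys);
    }

    public String getName() {
        return "CustomerCacheLoader";
    }

    public CacheLoader clone(Ehcache cache) {
        return new CustomerCacheLoader();
    }

    public void init() {
        // Acquire database resources here if needed.
    }

    public void dispose() {
        // Release database resources here if needed.
    }

    public Status getStatus() {
        return Status.STATUS_ALIVE;
    }

    private Object loadFromDatabase(Object key) {
        // Placeholder for a JDBC or DAO lookup keyed by the cache key.
        return null;
    }
}

Once registered with cache.registerCacheLoader(new CustomerCacheLoader()), reads made through getWithLoader() populate the cache on demand.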
Caching of a Partial Dataset
If the entire dataset cannot fit in the cache, some reads will miss the cache and fall through to the CacheLoader, which in turn hits the database. If a write for that data has occurred but has not yet reached the database because of write-behind, the value read from the database will be stale. The simplest solution is to ensure that the entire dataset is held in the cache. This has implications for cache configuration in the areas of expiry and eviction.
Eviction
Eviction or flushing of elements occurs when the maximum number of elements for the cache has been exceeded. Be sure to size the cache appropriately to avoid eviction or flushing. See "Sizing Storage Tiers" in the Configuration Guide for BigMemory Go.
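As a rough illustration, the following sketch sizes a cache programmatically. The cache name and the capacities are assumptions; real values should come from sizing your dataset as described in "Sizing Storage Tiers".

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.config.CacheConfiguration;
import net.sf.ehcache.config.MemoryUnit;

public class SizedCacheSetup {

    // Creates and registers a cache sized to hold the whole dataset so that
    // eviction never occurs. The capacities are illustrative only.
    public static Cache createSizedCache(CacheManager cacheManager) {
        CacheConfiguration sized = new CacheConfiguration()
                .name("customers")
                .maxEntriesLocalHeap(10000)                      // hot subset kept on heap
                .maxBytesLocalOffHeap(4, MemoryUnit.GIGABYTES);  // room for the full dataset off-heap

        Cache cache = new Cache(sized);
        cacheManager.addCache(cache);
        return cache;
    }
}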
Expiry
Even if the entire dataset fits in the cache, elements can still be removed when they expire. Consequently, set both the timeToLive and timeToIdle properties to eternal ("0") to prevent this from happening.
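In programmatic terms this corresponds to a configuration such as the following sketch (the name and heap size are illustrative). Setting eternal to true has the same effect as setting both time-to-live and time-to-idle to 0.

import net.sf.ehcache.config.CacheConfiguration;

public class NoExpiryCacheConfig {

    // Returns a configuration whose elements never expire: both
    // timeToLiveSeconds and timeToIdleSeconds are set to 0.
    public static CacheConfiguration create() {
        return new CacheConfiguration()
                .name("customers")
                .maxEntriesLocalHeap(10000)
                .timeToLiveSeconds(0)
                .timeToIdleSeconds(0);
    }
}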
