Concurrency Considerations
Unlike cache operations, which have selectable concurrency control or transactions, queries are asynchronous and search results are "eventually consistent" with the caches.
Index Updating
Indexes are updated asynchronously, so their state lags slightly behind that of the cache. The only exception is when the updating thread performs a search.
For caches with concurrency control, an index does not reflect the new state of the cache until:
*The change has been applied to the cluster.
*For a cache with transactions, commit has been called (illustrated in the sketch below).
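The following is a minimal sketch of this behavior, assuming a cache named "people" that is configured with transactionalMode="local" and declared searchable; the cache name, key, and value are hypothetical. A change made inside a transaction is not reflected by searches until commit() is called.

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;
import net.sf.ehcache.TransactionController;
import net.sf.ehcache.search.Query;
import net.sf.ehcache.search.Results;

public class CommitVisibilitySketch {
    public static void main(String[] args) {
        // Assumes an ehcache.xml that defines a cache named "people" with
        // transactionalMode="local" and a <searchable/> element so that keys
        // and values are indexed (hypothetical configuration).
        CacheManager cacheManager = CacheManager.newInstance();
        Cache cache = cacheManager.getCache("people");
        TransactionController txn = cacheManager.getTransactionController();

        txn.begin();
        cache.put(new Element("alice", 35));
        // Searches performed by other threads do not reflect this put yet;
        // for a transactional cache the index catches up only after commit.
        txn.commit();

        // After commit, the change is (eventually) visible to searches.
        Results results = cache.createQuery()
                .includeKeys()
                .addCriteria(Query.VALUE.ge(30))
                .execute();
        System.out.println("Matching keys: " + results.size());
        results.discard();

        cacheManager.shutdown();
    }
}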
Query Results
Unexpected results might occur if:
*A search returns an Element reference that no longer exists.
*Search criteria select an Element, but the Element has been updated.
*Aggregators, such as sum(), return results that disagree with the same calculation done manually by re-accessing the cache for each key and recomputing the value yourself.
*A value reference refers to a value that has been removed from the cache, and the cache has not yet been reindexed. If this happens, the value is null, but the key and attributes supplied by the stale cache index are non-null. Because values in a cache are also allowed to be null, you cannot tell whether the value is null because it was removed from the cache after the index was last updated, or because a null value was stored under that key (see the sketch after this list).
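The null-value case is the subtlest of these. A hedged sketch, using an illustrative cache named "people" with searchable keys and values, of why a null value in a search result is ambiguous:

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.search.Result;
import net.sf.ehcache.search.Results;

public class NullValueAmbiguitySketch {
    public static void main(String[] args) {
        // Assumes a searchable cache named "people" whose keys and values
        // are indexed (hypothetical configuration).
        CacheManager cacheManager = CacheManager.newInstance();
        Cache cache = cacheManager.getCache("people");

        Results results = cache.createQuery()
                .includeKeys()
                .includeValues()
                .execute();

        for (Result result : results.all()) {
            Object key = result.getKey();     // supplied by the index, non-null
            Object value = result.getValue(); // may be null

            if (value == null) {
                // Ambiguous: either the element was removed after the index was
                // last updated, or a null value was stored under this key.
                // The search result alone cannot distinguish the two cases.
                System.out.println(key + " -> null (removed or stored as null)");
            } else {
                System.out.println(key + " -> " + value);
            }
        }
        results.discard();
        cacheManager.shutdown();
    }
}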
Recommendations
Because the state of the cache can change between search executions, the following is recommended (both recommendations are illustrated in the sketch after this list):
*Add all of the aggregators you want for a query at once, so that the returned aggregators are consistent.
*Use null guards when accessing a cache with a key returned from a search.
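A hedged sketch of both recommendations, assuming a searchable cache named "people" with an indexed attribute named "age" (all names are illustrative): every aggregator is added in a single query so that all of them are computed against the same search execution, and a null guard protects the re-access of the cache by key.

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;
import net.sf.ehcache.search.Attribute;
import net.sf.ehcache.search.Query;
import net.sf.ehcache.search.Result;
import net.sf.ehcache.search.Results;

import java.util.List;

public class SearchRecommendationsSketch {
    public static void main(String[] args) {
        // Assumes a searchable cache named "people" that declares a
        // searchAttribute named "age" (hypothetical configuration).
        CacheManager cacheManager = CacheManager.newInstance();
        Cache cache = cacheManager.getCache("people");
        Attribute<Integer> age = cache.getSearchAttribute("age");

        // Recommendation 1: add all aggregators to one query so that they are
        // computed against the same search execution.
        Query aggregateQuery = cache.createQuery()
                .includeAggregator(age.sum(), age.average(), age.count());
        Results aggregateResults = aggregateQuery.execute();
        List<Object> aggregators = aggregateResults.all().get(0).getAggregatorResults();
        System.out.println("sum=" + aggregators.get(0)
                + " average=" + aggregators.get(1)
                + " count=" + aggregators.get(2));
        aggregateResults.discard();

        // Recommendation 2: null-guard any cache access made with a key
        // returned from a search, because the element may have been removed
        // since the index was last updated.
        Results keyResults = cache.createQuery()
                .includeKeys()
                .addCriteria(age.ge(18))
                .execute();
        for (Result result : keyResults.all()) {
            Element element = cache.get(result.getKey());
            if (element == null) {
                continue; // element removed after the index was built
            }
            System.out.println(result.getKey() + " -> " + element.getObjectValue());
        }
        keyResults.discard();

        cacheManager.shutdown();
    }
}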