Blocking Cache
Imagine you have a very busy web site with thousands of concurrent users. Rather than being evenly distributed across the site, they tend to gravitate to popular pages. These pages are not static: they contain dynamic data that goes stale within a few minutes. Or imagine you have collections of data that go stale in a few minutes. In each case the data is extremely expensive to calculate. If every request thread computes the same thing, that is a lot of duplicated work. Now add a cache. Each thread checks the cache; if the data is not there, it fetches or computes the data and puts it in the cache.
Now imagine that so many users contend for the same data that, in the time it takes the first user to request the data and put it in the cache, ten other users have done the same thing. The upstream system, whether a JSP or Velocity page, a service layer, or a database, is doing ten times more work than it needs to. Enter the BlockingCache. It is blocking because all threads requesting the same key wait for the first thread to complete. Once the first thread has completed, the other threads simply obtain the cache entry and return. The BlockingCache can scale up to very busy systems. Each thread can either wait indefinitely, or you can specify a timeout using the timeoutMillis constructor argument.
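As a rough illustration of the pattern, the sketch below decorates an existing Ehcache instance with a BlockingCache and walks through the get-then-put protocol a request thread follows. The cache name "popularPages", the computePage helper, and the 10-second timeout are assumptions for illustration only; here the timeout is configured with setTimeoutMillis, though it may also be supplied at construction time depending on version.

import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Ehcache;
import net.sf.ehcache.Element;
import net.sf.ehcache.constructs.blocking.BlockingCache;
import net.sf.ehcache.constructs.blocking.LockTimeoutException;

public class BlockingCacheSketch {

    public static void main(String[] args) {
        CacheManager cacheManager = CacheManager.getInstance();

        // Decorate an existing cache (name assumed) so that concurrent
        // requests for the same key block behind the first requester.
        Ehcache underlyingCache = cacheManager.getEhcache("popularPages");
        BlockingCache blockingCache = new BlockingCache(underlyingCache);
        blockingCache.setTimeoutMillis(10000); // wait up to 10 s instead of indefinitely
        cacheManager.replaceCacheWithDecoratedCache(underlyingCache, blockingCache);

        String key = "/popular/page";
        try {
            // get() acquires a lock for the key; other threads asking for the
            // same key block here until this thread puts a value or times out.
            Element element = blockingCache.get(key);
            if (element == null) {
                try {
                    // The expensive work happens exactly once per key.
                    String page = computePage(key);
                    blockingCache.put(new Element(key, page));
                } catch (RuntimeException e) {
                    // A put is required to release the lock even on failure;
                    // otherwise waiting threads block until the timeout.
                    blockingCache.put(new Element(key, null));
                    throw e;
                }
            }
        } catch (LockTimeoutException e) {
            // Another thread held the lock for this key longer than timeoutMillis.
        }
    }

    // Hypothetical stand-in for the expensive page rendering or database work.
    private static String computePage(String key) {
        return "<html>rendered content for " + key + "</html>";
    }
}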
For more information, see the BlockingCache Javadoc at http://www.ehcache.org/apidocs/2.10.1/.