Balzanka / guava-libraries

Automatically exported from code.google.com/p/guava-libraries
Apache License 2.0

AsyncLoadingCache #1350

Closed. GoogleCodeExporter closed this issue 9 years ago.

GoogleCodeExporter commented 9 years ago
TODO: fill in details from:

- internal bug 5324489
- https://groups.google.com/d/topic/guava-discuss/6E61UxtwErw/discussion
- 2012/08/20 internal thread "AsyncLoadingCache"
- some comments from issue 1122

A decision that we need to explain is "Why have an AsyncLoadingCache<K, V> 
instead of just recommending a LoadingCache<K, Future<V>>?" Here are a few reasons I 
encountered while skimming the above resources:

"""
I think that a Cache<K, Future<V>> is a misleading idea. It just makes too
many things surprising.

It means that Future<V> gets added to the cache immediately, and that nobody
blocks waiting for it to compute. It means that cache stats no longer
accurately reflect computation time. It means that you can get removal
notifications for stuff that hasn't even finished computing. And to do
anything sensible with the notification you might need to wait for it to
compute. It means that Cache.asMap().containsKey(key) returns true even if
the cache doesn't yet contain the cached value. It means that Cache.size()
counts values that haven't even been computed. It means that weighted
eviction has to weigh an entry before its value has even been computed. It means
that weak and soft values are completely broken. You get the point. It's a
complete mess.
"""

In a discussion about how weak and soft references are broken:
"""
> I don't think the failure mode is surprising except in the
> general way that soft/weak references are surprising. It's not obvious to me
> that Future<V> is different from Collection<V> in this regard.
> I'm not actually convinced getFuture() even solves this soft/weak-references
> problem, which makes things even _more_ surprising.

The problem is that a user's reference to the "real" value doesn't
keep the Future alive and hence permits the cache entry to be garbage
collected.  The value stored directly by the cache (in whatever type
of reference is appropriate) needs to be the same value the user would
hold on to.
"""

"""
CacheBuilderSpec#refreshAfterWrite no longer works as expected - with a normal 
value cache, Cache#get returns immediately even for slow computations (and I 
would expect getFuture() to return an immediate future in this case), while a 
future cache will return a still-loading future. Moreover, if a computation 
fails, it will still overwrite a previously successful computation, whereas in a 
value cache the failure would simply be swallowed.
"""

There are various other concerns (a rough interface sketch follows this list):
- Is asMap() a map of Futures or just of values?
- AsyncLoadingCache can guarantee that get() won't block.
- getIfPresent() could return a V instead of a Future<V>.
- LoadingCache.get() throws a checked exception, but there's no reason for that in 
AsyncLoadingCache, so users would always want to use getUnchecked().
- Lots more to be figured out about what happens to failures.
- loadAll interface: does it return Map<Future>? Future<Map>? (Future<Map<Future>>?)
- We can improve interruption behavior, as discussed on issue 1122.
- We could provide clever cancellation logic: if all users awaiting a Future 
call cancel(), we can cancel the cache loading.
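
As a way to visualize how those questions might resolve, here is a rough, hypothetical 
interface sketch; it is not an actual Guava API, and every method shown is just one 
possible answer:

```java
import com.google.common.collect.ImmutableMap;
import com.google.common.util.concurrent.ListenableFuture;
import java.util.concurrent.ConcurrentMap;

// Hypothetical sketch only: get() never blocks and never throws a checked exception,
// getIfPresent() hands back an already-computed value directly, and bulk loading
// returns a single future of the whole map.
public interface AsyncLoadingCache<K, V> {
  /** Never blocks; loading happens on the cache's executor. */
  ListenableFuture<V> get(K key);

  /** Returns the value only if it is present and already computed, else null. */
  V getIfPresent(Object key);

  /** One possible answer to the loadAll question: a future of the complete map. */
  ListenableFuture<ImmutableMap<K, V>> getAll(Iterable<? extends K> keys);

  /** One possible answer to the asMap question: a view of computed values only. */
  ConcurrentMap<K, V> asMap();
}
```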

Original issue reported on code.google.com by cpov...@google.com on 21 Mar 2013 at 3:23

GoogleCodeExporter commented 9 years ago

Original comment by fry@google.com on 22 Mar 2013 at 11:08

GoogleCodeExporter commented 9 years ago
Issue 1490 has been merged into this issue.

Original comment by cgdecker@google.com on 2 Aug 2013 at 9:19

GoogleCodeExporter commented 9 years ago
Just my two cents: async caches are traps for deadlocks, especially 
database-related ones.

Let's say you use the cache for an expensive DB object, and you want to 
retrieve that object while inside another transaction. With a regular cache 
there is only one thread, and the caller is responsible for passing the 
transaction to the cache-loading function. But, I guess, an async cache will use 
a separate thread to retrieve the object, and that thread will need its own 
transaction. Under stress, the DB connection pool will be exhausted, and the async 
cache's thread will wait forever for an available connection, which is being 
held by the caller's thread, which in turn waits for the async cache to return: 
hence the deadlock. Which, by the way, cannot be detected by JConsole, at least 
with C3P0 as the pool implementation.
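
A simplified, hypothetical reproduction of that scenario (a Semaphore with one permit 
stands in for the exhausted connection pool, and a single-thread executor for the 
cache's loader thread):

```java
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.MoreExecutors;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class AsyncCacheDeadlockSketch {
  static final Semaphore connectionPool = new Semaphore(1); // stand-in for the DB pool

  public static void main(String[] args) throws Exception {
    ListeningExecutorService cacheLoaderThread =
        MoreExecutors.listeningDecorator(Executors.newSingleThreadExecutor());

    connectionPool.acquire(); // the caller's transaction holds the last connection
    try {
      ListenableFuture<String> future = cacheLoaderThread.submit(() -> {
        connectionPool.acquire(); // loader needs its own connection: none are left
        try {
          return "expensive db object";
        } finally {
          connectionPool.release();
        }
      });
      // The caller blocks on the future while holding the connection the loader is
      // waiting for: a deadlock that lock-based detectors like JConsole cannot see.
      future.get(2, TimeUnit.SECONDS);
    } catch (TimeoutException expected) {
      System.out.println("Deadlocked: the loader waits on a connection the caller holds.");
    } finally {
      connectionPool.release();
      cacheLoaderThread.shutdownNow();
    }
  }
}
```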

ListenableFuture.addListener() might be the recommended way to use an async cache, 
but that would hurt its practical use.

Original comment by bahri.ge...@gmail.com on 13 Dec 2013 at 8:56

GoogleCodeExporter commented 9 years ago
This issue has been migrated to GitHub.

It can be found at https://github.com/google/guava/issues/<issue id>

Original comment by cgdecker@google.com on 1 Nov 2014 at 4:12

GoogleCodeExporter commented 9 years ago

Original comment by cgdecker@google.com on 1 Nov 2014 at 4:18

GoogleCodeExporter commented 9 years ago

Original comment by cgdecker@google.com on 3 Nov 2014 at 9:08