jamesbrowder / guava-libraries

Automatically exported from code.google.com/p/guava-libraries
Apache License 2.0

Please don't remove MapMaker.maximumSize #744

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
Please don't remove MapMaker.maximumSize(). I have a use case for it that I 
can't replace with the new Cache.

Consider this: I have a map with a maximum size that I use to cache the IDs of 
unique rows in a SQL table. The table has one or more key columns and zero or 
more additional non-key columns. The cache is keyed by a List<Object> of the 
key column values, but if an entry is not found, I need to insert the non-key 
columns too. So I have:

private final ConcurrentMap<List<Object>, Long> cache =
    new MapMaker().maximumSize(CACHE_CAPACITY).makeMap();

...

List<Object> keys = ...
List<Object> nonKeys = ...
Long dbId = cache.get(keys);
if (dbId == null) {
    dbId = getIdOrInsertRow(keys, nonKeys);
    cache.putIfAbsent(keys, dbId);
}

Basically, if the database ID of the row is not in the cache, 
getIdOrInsertRow() goes and fetches it from the database, *or*, if this is an 
entirely new value not yet present in the database, it inserts the row, and for 
that it needs to use the non-key values too. This is where I can't use a guava 
Cache (or the deprecated computingMap, for that matter): there's no way (except 
using a thread local - ugly!) to communicate the non-keys to CacheLoader.load(); 
I _absolutely_ need to use the get/putIfAbsent combination.
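
Something along these lines, simplified for illustration (plain JDBC; assume a 
DataSource field named dataSource and a table MY_TABLE with a generated ID 
column, two key columns K1/K2 and one non-key column C1 - the real names don't 
matter here):

private Long getIdOrInsertRow(List<Object> keys, List<Object> nonKeys)
        throws SQLException {
    Connection con = dataSource.getConnection();
    try {
        // Look for an existing row by its key columns.
        PreparedStatement select = con.prepareStatement(
            "SELECT ID FROM MY_TABLE WHERE K1 = ? AND K2 = ?");
        select.setObject(1, keys.get(0));
        select.setObject(2, keys.get(1));
        ResultSet rs = select.executeQuery();
        if (rs.next()) {
            return rs.getLong(1);
        }
        // Not there yet: insert the row, non-key column included, and
        // return the generated ID.
        PreparedStatement insert = con.prepareStatement(
            "INSERT INTO MY_TABLE (K1, K2, C1) VALUES (?, ?, ?)",
            Statement.RETURN_GENERATED_KEYS);
        insert.setObject(1, keys.get(0));
        insert.setObject(2, keys.get(1));
        insert.setObject(3, nonKeys.get(0));
        insert.executeUpdate();
        ResultSet generated = insert.getGeneratedKeys();
        generated.next();
        return generated.getLong(1);
    } finally {
        con.close();
    }
}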

Of course, I could go back to just using a synchronized LinkedHashMap with 
finite capacity, but I specifically liked the additional concurrency I got from 
MapMaker…

Original issue reported on code.google.com by szege...@gmail.com on 7 Oct 2011 at 12:06

GoogleCodeExporter commented 9 years ago
Could you use some key object that contains both the keys list and the non-keys 
list but only uses the keys for equality? The CacheLoader would then have the 
non-keys it needs to do the insert. It could even clear its references to the 
non-keys after inserting if that were desired.

  Row row = new Row(keys, nonKeys);
  Long dbId = cache.get(row);

Not saying that's the best solution necessarily, but it seems like it should 
work.
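
Such a Row class could be as simple as this (just a sketch, names illustrative): 
only the key columns participate in equals()/hashCode(), so the non-key columns 
simply ride along for the loader.

  final class Row {
    private final List<Object> keys;
    private List<Object> nonKeys;  // only needed by the loader

    Row(List<Object> keys, List<Object> nonKeys) {
      this.keys = keys;
      this.nonKeys = nonKeys;
    }

    List<Object> keys() { return keys; }
    List<Object> nonKeys() { return nonKeys; }

    // Lets the loader drop the non-key payload once the row is inserted.
    void clearNonKeys() { nonKeys = null; }

    @Override public boolean equals(Object o) {
      return o instanceof Row && keys.equals(((Row) o).keys);
    }

    @Override public int hashCode() {
      return keys.hashCode();
    }
  }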

Original comment by cgdec...@gmail.com on 7 Oct 2011 at 3:24

GoogleCodeExporter commented 9 years ago
Yes, your solution would work, although I dislike needing a separate reference 
field in the "Row" class that'll mostly be null; it's a waste of space, and it 
is not conceptually different from passing the non-keys in a thread local. Both 
work "out of band" with regard to the semantics of the caching API - which 
presumes values are always computable from the key - so the approach does feel 
like a kludge/workaround. I'm in a situation where I had something that worked 
well with your MapMaker API, and now I'm being forced into a workaround.

I definitely like my current implementation better, as I feel it is simpler and 
more explicit. I hope I was able to demonstrate that there are legitimate uses 
for a concurrent LRU map that is explicitly managed by the application code, 
and that MapMaker.maximumSize() therefore still has its merit.

Original comment by szege...@gmail.com on 7 Oct 2011 at 6:12

GoogleCodeExporter commented 9 years ago
We did jump the gun a bit with the deprecation.  We aren't going to remove 
MapMaker.maximumSize() until its use cases are adequately covered by Cache.

Two things:

1. In release 10.0.1, which should be out early next week, we have enabled 
Cache.asMap().put() (plus putIfAbsent, etc.).

2. In release 11.0 there will also be Cache.get(K, Callable<V> valueLoader).

Both show that we do not "presume values are always computable from the key".
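
For example (a sketch only, and it assumes the later no-arg 
CacheBuilder.build() that returns a plain Cache), your get/putIfAbsent pattern 
carries over to the asMap() view directly:

Cache<List<Object>, Long> cache = CacheBuilder.newBuilder()
    .maximumSize(CACHE_CAPACITY)
    .build();
ConcurrentMap<List<Object>, Long> map = cache.asMap();

// Same pattern as before; the size-bounded eviction now comes from the Cache.
Long dbId = map.get(keys);
if (dbId == null) {
  dbId = getIdOrInsertRow(keys, nonKeys);
  map.putIfAbsent(keys, dbId);
}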

You've concluded we shouldn't remove MapMaker.maximumSize(), but it doesn't seem 
like you're aware of where we're going with Cache.

Already in head (while it might not have percolated out yet), we 

Original comment by kevinb@google.com on 7 Oct 2011 at 10:24

GoogleCodeExporter commented 9 years ago
So your code could become

final List<Object> keys = ...
final List<Object> nonKeys = ...
Long dbId = cache.get(keys,
    new Callable<Long>() {
      public Long call() {
        return getIdOrInsertRow(keys, nonKeys);
      }
    });

The anonymous class means you end up with about the same amount of bulk as 
before (but hey, JDK 8 will fix that...), but the result is simpler.
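
With the JDK 8 lambdas alluded to above, the same call could eventually shrink 
to roughly:

// Callable<Long> is a single-method interface, so a lambda fits here.
Long dbId = cache.get(keys, () -> getIdOrInsertRow(keys, nonKeys));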

Anyway, we're sorry for deprecating a touch too soon.

Original comment by kevinb@google.com on 7 Oct 2011 at 10:30

GoogleCodeExporter commented 9 years ago
You're right that I wasn't aware of where you're going with Cache -- I was 
using 10.0.0. It actually felt like a consistent design not to have put() in 
the Cache interface, so I assumed that was the final design. If putIfAbsent() 
will be available on Cache, that solves my problem perfectly - thanks.

Out of curiosity, though, if you have put and putIfAbsent in a Cache, it comes 
even closer to resembling a Map, so what's the most significant discriminator 
between Cache and Map? Do you have a document somewhere that explains it?

Original comment by szege...@gmail.com on 7 Oct 2011 at 10:31

GoogleCodeExporter commented 9 years ago
Not Cache.put() and Cache.putIfAbsent(): Cache.asMap().put() and 
Cache.asMap().putIfAbsent(). They're one level of indirection away and as 
de-emphasized as we can make them. We assume people will use them only when 
they really need to.

The most significant discriminator?  I mean, just look at all those fancy 
methods on Cache, none of which exist on Map.
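
For instance, a few of the Cache-only operations (as they appear in later Guava 
releases):

cache.invalidate(key);             // explicit, single-key eviction
cache.invalidateAll();             // discard all entries
CacheStats stats = cache.stats();  // hit/miss/load/eviction statistics
cache.cleanUp();                   // run any pending maintenance now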

Original comment by kevinb@google.com on 8 Oct 2011 at 2:36

GoogleCodeExporter commented 9 years ago
In reply to comment 4: this code actually looks nice conceptually. Thanks!

Original comment by szege...@gmail.com on 9 Oct 2011 at 12:26

GoogleCodeExporter commented 9 years ago
This issue has been migrated to GitHub.

It can be found at https://github.com/google/guava/issues/<id>

Original comment by cgdecker@google.com on 1 Nov 2014 at 4:15
