ben-manes / concurrentlinkedhashmap

A ConcurrentLinkedHashMap for Java
Apache License 2.0

Cache capacity limitation of being an Integer #33

GoogleCodeExporter closed this issue 9 years ago

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?
1. Use ByteArrayWeigher (one weight unit per byte).
2. Set the capacity to > 2 * 1024 * 1024 * 1024, i.e. beyond Integer.MAX_VALUE.

What is the expected output? What do you see instead?
It should work, since the machine has enough memory available. Instead, an IllegalArgumentException is thrown (stack trace below).

What version of the product are you using? On what operating system?
1.2

Please provide any additional information below.
The following exception is thrown:
Caused by: java.lang.IllegalArgumentException
    at com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Builder.initialCapacity(ConcurrentLinkedHashMap.java:1647)
    at org.apache.cassandra.cache.SerializingCache.<init>(SerializingCache.java:63)
    at org.apache.cassandra.cache.SerializingCacheProvider.create(SerializingCacheProvider.java:33)
    at org.apache.cassandra.service.CacheService.initRowCache(CacheService.java:129)
    at org.apache.cassandra.service.CacheService.<init>(CacheService.java:88)
    at org.apache.cassandra.service.CacheService.<clinit>(CacheService.java:63)
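
A minimal sketch of how the limit is hit (assuming v1.2's int-valued Builder.maximumWeightedCapacity and the library's Weighers.byteArray(); the class name is mine):

```java
import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;
import com.googlecode.concurrentlinkedhashmap.Weighers;

public class CapacityOverflowRepro {
  public static void main(String[] args) {
    // 2 GB, with one weight unit per byte: one past Integer.MAX_VALUE.
    long desiredBytes = 2L * 1024 * 1024 * 1024;

    // The narrowing cast overflows to Integer.MIN_VALUE, a negative value
    // that the builder's precondition check rejects.
    ConcurrentLinkedHashMap<String, byte[]> cache =
        new ConcurrentLinkedHashMap.Builder<String, byte[]>()
            .maximumWeightedCapacity((int) desiredBytes) // IllegalArgumentException
            .weigher(Weighers.byteArray())
            .build();
  }
}
```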

The fix may be to make the capacity variable a long instead of an int.

Original issue reported on code.google.com by vijay2...@gmail.com on 13 Apr 2012 at 8:29

GoogleCodeExporter commented 9 years ago
This is a duplicate of issue 31.

The usual workaround is to change the weight unit from 1 byte to a larger sector 
size, such as 1 KB.
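
For example, a byte-array weigher that charges in 1 KB units (a sketch; the class name is mine) lets an int capacity of Integer.MAX_VALUE units cover roughly 2 TB:

```java
import com.googlecode.concurrentlinkedhashmap.Weigher;

// Weighs byte arrays in 1 KB units instead of individual bytes.
public class KilobyteWeigher implements Weigher<byte[]> {
  @Override
  public int weightOf(byte[] value) {
    // Round up so even small entries cost at least one unit.
    return Math.max(1, (value.length + 1023) / 1024);
  }
}
```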

I think a long capacity is fine, but I'm not actively working on a next release 
to roll this into anytime soon. If this is critical, then it could go out as a 
patch release. You are of course welcome to fork if neither of those options 
works for you.

I helped my former colleagues at Google with Guava's CacheBuilder (formerly 
MapMaker), which could be considered the successor to this project. There the 
maximum weight is a long.
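
A sketch of the equivalent setup there, where CacheBuilder.maximumWeight takes a long and so accepts limits beyond 2 GB:

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.Weigher;

public class GuavaLongWeightExample {
  public static void main(String[] args) {
    Cache<String, byte[]> cache = CacheBuilder.newBuilder()
        .maximumWeight(4L * 1024 * 1024 * 1024) // a long, so 4 GB is fine
        .weigher(new Weigher<String, byte[]>() {
          @Override public int weigh(String key, byte[] value) {
            return value.length; // per-entry weight is still an int
          }
        })
        .build();
  }
}
```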

Original comment by Ben.Manes@gmail.com on 13 Apr 2012 at 10:38

GoogleCodeExporter commented 9 years ago
Hi Ben,
The problem is that our entries don't come in 1 KB units or constant-size 
chunks, and Guava doesn't provide a descendingKeySetWithLimit method.

It looks like switching to long fixes the problem, but it breaks most of the tests.
I can fix them, but is this something of interest to you?
The other option is to make hasOverflowed() extendable and settable via the 
builder, so that an extension can count and manage the weightedSize on its own 
(a hypothetical sketch of this follows).
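
A hypothetical sketch of that second option (none of this is actual CLHM API): the extension keeps its own long-valued weighted size and supplies the overflow test that the builder would install in place of the internal check:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical: a pluggable replacement for the map's internal
// hasOverflowed() check, backed by a long instead of an int.
final class LongWeightOverflowPolicy {
  private final AtomicLong weightedSize = new AtomicLong();
  private final long capacityInBytes;

  LongWeightOverflowPolicy(long capacityInBytes) {
    this.capacityInBytes = capacityInBytes;
  }

  void onAdd(long weight)    { weightedSize.addAndGet(weight); }
  void onRemove(long weight) { weightedSize.addAndGet(-weight); }

  // What the builder-supplied hook would answer instead of hasOverflowed().
  boolean hasOverflowed() {
    return weightedSize.get() > capacityInBytes;
  }
}
```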

Original comment by vijay2...@gmail.com on 16 Apr 2012 at 10:10

GoogleCodeExporter commented 9 years ago
>>> The other option is to make hasOverflowed() extendable and settable via the 
builder, so that an extension can count and manage the weightedSize on its own.

The attached patch allows users to specify the capacity in bytes, KB, or MB, so 
the existing limitation can be removed. Thanks!
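
As a rough illustration of the described idea (reusing the hypothetical KilobyteWeigher sketched above; this is not the attached patch), a 1 KB unit lets a 4 GB limit fit comfortably in an int capacity:

```java
import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;

public class UnitScaledCapacityExample {
  public static void main(String[] args) {
    int capacityInKb = 4 * 1024 * 1024; // 4 GB expressed in 1 KB units
    ConcurrentLinkedHashMap<String, byte[]> cache =
        new ConcurrentLinkedHashMap.Builder<String, byte[]>()
            .maximumWeightedCapacity(capacityInKb)
            .weigher(new KilobyteWeigher()) // from the earlier sketch
            .build();
  }
}
```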

Original comment by vijay2...@gmail.com on 18 Apr 2012 at 4:53

Attachments:

GoogleCodeExporter commented 9 years ago

Original comment by vijay2...@gmail.com on 18 Apr 2012 at 5:10

Attachments:

GoogleCodeExporter commented 9 years ago
Fixed in v1.3. I plan on releasing this tonight.

Also introduced EntryWeigher<K, V> to allow weighing by both key and value. We 
avoided this oversight in Guava's CacheBuilder from the get-go.
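
For instance, a sketch against the v1.3 API (the weights shown are illustrative; key.length() counts chars as a rough proxy for size):

```java
import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;
import com.googlecode.concurrentlinkedhashmap.EntryWeigher;

public class EntryWeigherExample {
  public static void main(String[] args) {
    ConcurrentLinkedHashMap<String, byte[]> cache =
        new ConcurrentLinkedHashMap.Builder<String, byte[]>()
            .maximumWeightedCapacity(4L * 1024 * 1024 * 1024) // a long as of v1.3
            .weigher(new EntryWeigher<String, byte[]>() {
              @Override public int weightOf(String key, byte[] value) {
                // Charge both the key and the value toward the capacity.
                return key.length() + value.length;
              }
            })
            .build();
  }
}
```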

I believe Cassandra wanted entry weighers too, but it wasn't high priority (no 
bug filed). Please consider adopting it when you upgrade the library.

Original comment by Ben.Manes@gmail.com on 8 May 2012 at 9:45