nassim-git / project-voldemort

Automatically exported from code.google.com/p/project-voldemort
Apache License 2.0

Improved CacheStorageConfiguration #225

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago
As brought up in http://groups.google.com/group/project-voldemort/browse_thread/thread/566693d8db90ebbd/05a6d6fc255dd628, I've written an improved CacheStorageConfiguration which doesn't suffer from complete cache eviction when Java does a full GC.

The initial code drop can be seen at http://github.com/Omega1/voldemort/commit/886f6a5f65c452f1bc8d55373d33d1e114fbb3b8

Latest code:

http://github.com/Omega1/voldemort/blob/master/src/java/voldemort/store/memory/CacheStorageConfiguration.java
http://github.com/Omega1/voldemort/blob/master/src/java/voldemort/store/memory/ConcurrentLinkedHashMap.java
http://github.com/Omega1/voldemort/blob/master/src/java/voldemort/server/VoldemortConfig.java
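
For context, here is a minimal sketch of the failure mode being addressed. It assumes the stock cache engine keeps values behind SoftReferences, as discussed in the linked thread; the class and field names below are illustrative, not Voldemort's actual code.

```java
import java.lang.ref.SoftReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/**
 * Illustrative only: contrasts a soft-reference-backed cache, whose entries
 * the JVM may clear wholesale under memory pressure (e.g. during a full GC),
 * with a size-bounded map that evicts incrementally instead.
 */
public class CacheEvictionSketch {

    // Soft-reference values: a full GC under memory pressure can clear
    // every entry at once, emptying the whole cache.
    private final ConcurrentMap<String, SoftReference<byte[]>> softCache =
            new ConcurrentHashMap<String, SoftReference<byte[]>>();

    public byte[] getSoft(String key) {
        SoftReference<byte[]> ref = softCache.get(key);
        byte[] value = (ref == null) ? null : ref.get();
        if (value == null) {
            softCache.remove(key);  // referent was collected; treat as a miss
        }
        return value;
    }

    public void putSoft(String key, byte[] value) {
        softCache.put(key, new SoftReference<byte[]>(value));
    }

    // A size-bounded alternative (what a ConcurrentLinkedHashMap-style store
    // provides) holds strong references and evicts incrementally via LRU, so
    // a full GC cannot empty the cache in one sweep.
}
```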

Original issue reported on code.google.com by bruce.ri...@gmail.com on 2 Mar 2010 at 7:24

GoogleCodeExporter commented 9 years ago

Original comment by feinb...@gmail.com on 2 Mar 2010 at 7:29

GoogleCodeExporter commented 9 years ago

Original comment by feinb...@gmail.com on 17 Mar 2010 at 1:41

GoogleCodeExporter commented 9 years ago
branch for this issue created at 
http://github.com/Omega1/voldemort/tree/issue225

Original comment by bruce.ri...@gmail.com on 19 Mar 2010 at 10:55

GoogleCodeExporter commented 9 years ago
Hi Bruce,

I noticed you're using a modified version of ConcurrentLinkedHashMap by 
Benjamin Manes. Is it possible for us to just add it as a dependency jar and 
put a wrapper around it rather than modifying the internals? Work on that 
project is still ongoing, and we may benefit from its improvements.

Thanks,
- Alex

Original comment by feinb...@gmail.com on 24 Aug 2010 at 5:45

GoogleCodeExporter commented 9 years ago
Possibly. I've talked with Ben Manes about his ongoing work, and I think the use 
case I'm trying to solve isn't entirely compatible with the direction he is 
taking the library. That said, the required functionality might be achievable 
via a wrapper; I just haven't taken the time to try that approach.

Original comment by bruce.ri...@gmail.com on 24 Aug 2010 at 8:05

GoogleCodeExporter commented 9 years ago
(Found by Google alert)

Bruce's changes are based on an older version of the CLHM library. The release 
version, v1.0, is a complete rewrite based on a different design. I would be 
open to changes in v1.1 to make this possible.

The cleanest approach seems to be to allow a pluggable evaluator for capacity 
handling. The current design is based on a weighted maximum; yours would 
instead be percentage-based, with a different evaluation scheme. Feel free to 
open a feature request on my tracker, link the two issues, and propose the 
interface for the predicate.
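
A rough sketch of what such a pluggable capacity predicate could look like. The interface name and method signature here are hypothetical (the real shape would be settled in the feature request), and the heap-percentage check is just one possible evaluation scheme.

```java
import java.util.Map;

// Hypothetical predicate interface for pluggable capacity handling; the
// actual interface would be defined in the CLHM feature request.
interface CapacityEvaluator<K, V> {
    /** Returns true when the map should evict entries to shed capacity. */
    boolean hasExceededCapacity(Map<K, V> map);
}

// One possible evaluation scheme: evict once heap utilization crosses a
// percentage threshold, instead of using a weighted maximum on the entries.
class HeapPercentageEvaluator<K, V> implements CapacityEvaluator<K, V> {
    private final double maxHeapFraction;

    HeapPercentageEvaluator(double maxHeapFraction) {
        this.maxHeapFraction = maxHeapFraction;
    }

    public boolean hasExceededCapacity(Map<K, V> map) {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        return used > rt.maxMemory() * maxHeapFraction;
    }
}
```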

Original comment by Ben.Manes@gmail.com on 8 Sep 2010 at 2:01

GoogleCodeExporter commented 9 years ago
Hi Ben,

Thanks for the suggestion (and thanks for the library; we use it in other places 
in LinkedIn's code base!).

Bruce, since you have more context on this, would you like to file an issue 
on Ben's tracker?

- Alex

Original comment by feinb...@gmail.com on 11 Sep 2010 at 2:53

GoogleCodeExporter commented 9 years ago
Yes, I'll do that.

Original comment by bruce.ri...@gmail.com on 14 Sep 2010 at 9:58

GoogleCodeExporter commented 9 years ago
Filed as http://code.google.com/p/concurrentlinkedhashmap/issues/detail?id=19

Original comment by bruce.ri...@gmail.com on 14 Sep 2010 at 10:16

GoogleCodeExporter commented 9 years ago
Fixed for v1.1. Please review the code/test quality, the interface definition 
(better names?), and the documentation, and confirm that it satisfies your 
requirements. I plan to release v1.1 soon.

Also note that we have been porting the algorithms into Google Guava's 
MapMaker. At the moment this includes the concurrent LRU (#maximumSize()), 
eviction listener, and on-access expiration (concurrent time-based LRU). These 
work with the other map settings, such as soft references (e.g. maxSize + 
soft-ref fallback). Planned work includes bulk memoization, but not pluggable 
size limiters or weighted values (no consensus on what a Multimap cache means).

Consider using MapMaker by default when acceptable since it has wider support. 
I will continue to maintain and improve CLHM, with new features as exploratory 
for potential inclusion into MapMaker. CLHM will not be a superset of MapMaker, 
though, as I do not intend to port its capabilities back.
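
As a rough illustration of the MapMaker features mentioned above, a bounded concurrent map with soft-referenced values and an eviction callback might be built as below. The exact builder methods varied across Guava releases of that era (and were later superseded by CacheBuilder), so treat the names as approximate.

```java
import com.google.common.collect.MapEvictionListener;
import com.google.common.collect.MapMaker;

import java.util.concurrent.ConcurrentMap;

public class MapMakerSketch {
    public static void main(String[] args) {
        // Concurrent LRU bound plus soft-reference fallback, with a callback
        // invoked as entries are evicted.
        ConcurrentMap<String, byte[]> cache = new MapMaker()
                .maximumSize(10000)   // concurrent LRU size bound
                .softValues()         // soft-reference fallback
                .evictionListener(new MapEvictionListener<String, byte[]>() {
                    public void onEviction(String key, byte[] value) {
                        System.out.println("evicted: " + key);
                    }
                })
                .makeMap();

        cache.put("key", new byte[] {1, 2, 3});
    }
}
```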

Original comment by Ben.Manes@gmail.com on 24 Oct 2010 at 4:36

GoogleCodeExporter commented 9 years ago
Released v1.1. This is now available for JDK5 and JDK6 in the download section 
or through Maven.

See CapacityLimiter for supporting your requested feature.
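
A sketch of how the heap-percentage limiter from the earlier sketch might be wired into a CLHM instance. The builder and maximumWeightedCapacity/build calls exist in CLHM 1.x, but the capacityLimiter hook shown in the comment is an assumption based on the feature name; check the v1.1 javadocs for the actual signature.

```java
import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;

public class CapacityLimiterSketch {
    public static void main(String[] args) {
        ConcurrentLinkedHashMap<String, byte[]> cache =
                new ConcurrentLinkedHashMap.Builder<String, byte[]>()
                        .maximumWeightedCapacity(100000)
                        // .capacityLimiter(heapPercentageLimiter)  // hypothetical hook;
                        // see the earlier heap-percentage sketch and the v1.1 docs
                        .build();

        cache.put("key", new byte[] {1, 2, 3});
    }
}
```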

Original comment by Ben.Manes@gmail.com on 4 Nov 2010 at 5:50

GoogleCodeExporter commented 9 years ago
If this enhancement isn't being used and there is no pressing need for it, 
then I'd like to take the opportunity to remove the CapacityLimiter from CLHM 
v1.2.

I've grown less and less comfortable with it as is, and I don't particularly 
like the API. I'd be more inclined to evaluate a patch that adds that 
functionality natively, or to allow a fork to spearhead that direction. Many 
of the assumptions in the lock-amortization design may not make sense in a 
heap-bounded variant.

Original comment by Ben.Manes@gmail.com on 4 Mar 2011 at 4:23