Closed: GoogleCodeExporter closed this issue 8 years ago.
[deleted comment]
Ok. Ignore my other comments. :)
The attached patch is an implementation of a caching record manager. It
features fine-grained synchronisation: reads, writes, and deletes on individual
records do not block one another. Global operations (defrag, commit, rollback)
block and take priority.
Caching is done via the soft reference/weak hash map method. An MRU queue can
be used if the JVM is unnecessarily clearing SoftReferences.
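The soft-reference/weak-map caching method described above can be sketched roughly as follows (class and method names here are illustrative, not the names used in the actual patch; synchronisation is simplified to a single monitor rather than the patch's fine-grained scheme):

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a soft-reference record cache: cached records are
// reachable only through SoftReferences, so the GC may reclaim them under
// memory pressure instead of the JVM exhausting the heap. An MRU queue of
// strong references could be layered on top if the JVM clears
// SoftReferences too eagerly, as the comment above suggests.
public class SoftRecordCache {
    private final Map<Long, SoftReference<Object>> cache = new HashMap<>();

    public synchronized void put(long recid, Object record) {
        cache.put(recid, new SoftReference<>(record));
    }

    public synchronized Object get(long recid) {
        SoftReference<Object> ref = cache.get(recid);
        if (ref == null) {
            return null;              // record was never cached
        }
        Object record = ref.get();
        if (record == null) {
            cache.remove(recid);      // referent was cleared by the GC
        }
        return record;
    }
}
```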
Included is a JUnit stress test, which spins up 200 threads for 20 seconds and
hammers an instance of the cache. Also included is a GUI with a main() method
that lets you experiment with the locks.
The code is threadsafe and does not use the double-checked locking pattern,
which is known to be broken under the original Java memory model. (Yes, really.)
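For readers unfamiliar with the idiom being avoided here: double-checked locking was unsafe under the pre-Java-5 memory model, because a thread could observe a non-null reference to a partially constructed object. Since JSR-133 (Java 5), declaring the field volatile makes the idiom correct. A generic sketch, not code from the patch:

```java
// The double-checked locking idiom. Without 'volatile' on 'instance',
// another thread could see a non-null reference whose fields are not yet
// initialised (a reordering permitted by the old Java memory model).
// With 'volatile' (Java 5 and later), the idiom is safe.
public class LazySingleton {
    private static volatile LazySingleton instance;

    private LazySingleton() { }

    public static LazySingleton getInstance() {
        LazySingleton local = instance;          // one volatile read
        if (local == null) {                     // first, unsynchronised check
            synchronized (LazySingleton.class) {
                local = instance;
                if (local == null) {             // second, synchronised check
                    instance = local = new LazySingleton();
                }
            }
        }
        return local;
    }
}
```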
The patch creates a new class - CacheRecordManager3 - which is not integrated
into the RecordManagerFactory as yet, and does not replace the existing
CacheRecordManager.
Original comment by pmurray....@gmail.com
on 14 Mar 2011 at 5:43
Attachments:
Hi,
sorry for the very long delay (very busy in real life).
I had a quick look at the patch; it looks ok.
I granted you write access to the SVN repository. Feel free to integrate this
patch as an optional cache. There should be a parameter to switch it on.
I will try to find time as soon as possible.
Jan
Original comment by kja...@gmail.com
on 17 Mar 2011 at 10:35
No worries. May be able to get it done over the weekend.
Original comment by pmurray....@gmail.com
on 17 Mar 2011 at 12:21
Why not use LRU concurrent map available here -
http://code.google.com/p/concurrentlinkedhashmap/#Features?
Or use what the "Master"/Doug Lea wrote -
http://gee.cs.oswego.edu/dl/jsr166/dist/extra166ydocs/extra166y/ConcurrentReferenceHashMap.html
Original comment by ashwin.j...@gmail.com
on 18 Mar 2011 at 1:55
Ok, I have checked in the code as revision 67.
The ConcurrentReferenceHashMap and the other objects mentioned look good. But
I have already finished, and it passes the tests, so there you go.
I have made a few more changes, in particular - the exclusive/nonexclusive
locker is broken out into a separate class. Perhaps it should be moved into a
nice little googlecode project of its own.
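The exclusive/non-exclusive locker mentioned above is behaviourally close to what the JDK's ReentrantReadWriteLock provides; a rough sketch of the idea (the class and method names are illustrative, not the patch's actual API):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of an exclusive/non-exclusive locker: per-record operations take
// the shared (read) side so they can run concurrently, while global
// operations such as commit, rollback, or defrag take the exclusive
// (write) side. Constructing the lock in fair mode keeps a waiting global
// operation from being starved by a steady stream of record operations,
// approximating the "global operations take priority" behaviour.
public class RecordLocker {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock(true);

    public void enterNonExclusive() { lock.readLock().lock(); }
    public void exitNonExclusive()  { lock.readLock().unlock(); }

    public void enterExclusive()    { lock.writeLock().lock(); }
    public void exitExclusive()     { lock.writeLock().unlock(); }
}
```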
Changes (as per the svn update comment) are:
*Added CacheRecordManager3 and its helper classes
*Added Tests
*Plugged CacheRecordManager3 into Provider as the default soft cache
*Added some javadoc to RecordManagerOptions
*Corrected method name in TestIssues
*Altered ant build to point javadoc at the current SE6 location
I have altered the build file so that the generated javadoc will correctly link
to the SE6 javadocs at Oracle. However, I have not regenerated or checked the
javadoc into the project itself.
Original comment by pmurray....@gmail.com
on 18 Mar 2011 at 4:57
Hi,
this cache was not really faster than the old one, so I reverted the default
back to the old cache. The new one is still in JDBM and can be used with the
'soft3' parameter.
I guess the basic performance problem is overhead. For each entry inserted,
two new references have to be created, and a new Long object instance has to
be created with each lookup.
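The per-lookup Long allocation described above comes from autoboxing: JDBM record ids are primitive longs, but java.util maps require object keys, so every lookup boxes the id first. A small illustration (not code from JDBM):

```java
import java.util.Map;

// Illustrates the lookup overhead described above: cache.get(recid)
// autoboxes the primitive long into a Long (Long.valueOf) on every call.
// Outside the cached range of -128..127, each boxing typically allocates
// a fresh Long instance, so hot lookups generate garbage. A map keyed on
// primitive longs would avoid this cost entirely.
public class BoxingOverhead {
    public static Object lookup(Map<Long, Object> cache, long recid) {
        // recid is autoboxed here, i.e. Long.valueOf(recid), before the hash lookup
        return cache.get(recid);
    }
}
```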
Original comment by kja...@gmail.com
on 14 Apr 2011 at 6:51
Original issue reported on code.google.com by
pmurray....@gmail.com
on 28 Feb 2011 at 12:34