crenique / treapdb

Automatically exported from code.google.com/p/treapdb

Treap cache implementation can exhaust heap space #10

Open GoogleCodeExporter opened 8 years ago

GoogleCodeExporter commented 8 years ago
TreapDB uses a number of LRU caches. Because the amount of data that will be stored in 
these caches cannot be predicted in advance, and because the data structures hold hard 
references, the JVM can stop with an OutOfMemoryError.

The hard references to objects should be replaced with soft references, which can 
be garbage collected if the JVM is running short of memory. For BlockUtils.java 
the nodeCache should be declared as follows:

private Map<Integer, SoftReference<DiskTreapNode<K,V>>> nodeCache =
        new LRUMap<Integer, SoftReference<DiskTreapNode<K,V>>>(100000);

To write a node:

public void writeNode(int pos, DiskTreapNode<K,V> node, boolean changeValue) throws Exception {
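    // wrap the node in a SoftReference so the GC can reclaim the cached copy under memory pressure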
    SoftReference<DiskTreapNode<K,V>> softReference = new SoftReference<DiskTreapNode<K,V>>(node);
    nodeCache.put(pos, softReference);
...
}

and to read a node:

@SuppressWarnings("unchecked")
public DiskTreapNode<K,V> readNode(int pos, boolean loadValue) throws Exception {
    SoftReference<DiskTreapNode<K,V>> tmp;
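    // a hit in the map does not guarantee the node is still live: the GC may already have cleared the referent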
    if(!loadValue &&  (tmp= nodeCache.get(pos))!=null){
        DiskTreapNode<K,V> node = tmp.get();
        if (node != null) {
            return node;
        }
    }
...
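    // only nodes read without their value are re-cached, again behind a SoftReference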
    if (node.value == null) {
        SoftReference<DiskTreapNode<K,V>> softReference = new SoftReference<DiskTreapNode<K,V>>(node);
        nodeCache.put(pos, softReference);
    }
...
}
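
For reference, here is a minimal, self-contained sketch of the same pattern. It is not part of the patch above and uses a plain HashMap instead of TreapDB's LRUMap; it only illustrates why the referent must be re-checked after the map lookup, since the garbage collector may clear a SoftReference at any time when memory is tight, so a hit in the map is not necessarily a hit on the object.

import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

public class SoftCacheSketch {

    // cache values behind soft references so the GC can reclaim them under memory pressure
    private final Map<Integer, SoftReference<byte[]>> cache =
            new HashMap<Integer, SoftReference<byte[]>>();

    public void put(int key, byte[] value) {
        cache.put(key, new SoftReference<byte[]>(value));
    }

    public byte[] get(int key) {
        SoftReference<byte[]> ref = cache.get(key);
        if (ref == null) {
            return null;              // never cached
        }
        byte[] value = ref.get();
        if (value == null) {
            cache.remove(key);        // referent reclaimed by the GC; drop the stale entry
        }
        return value;
    }
}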

Original issue reported on code.google.com by david.ge...@gmail.com on 9 Feb 2012 at 10:01

GoogleCodeExporter commented 8 years ago
Thanks very much; I have applied your patch to fix the bug.

Original comment by ccnu...@gmail.com on 13 Feb 2012 at 6:06

GoogleCodeExporter commented 8 years ago
Just for information, I'm using treapdb as the storage for my CMS, Magneato. I ran 
some performance tests with the small English Wikipedia database (about 100k pages) 
and was impressed with the speed; that is how I spotted this issue.

Original comment by david.ge...@gmail.com on 19 May 2012 at 2:14