jankotek / mapdb

MapDB provides concurrent Maps, Sets and Queues backed by disk storage or off-heap-memory. It is a fast and easy to use embedded Java database engine.
https://mapdb.org
Apache License 2.0

expireStoreSize() is not working with fileDB #945

Open · vipinmpd08 opened this issue 5 years ago

vipinmpd08 commented 5 years ago

Hi, thanks for such a beautiful, lightweight API. I have one question about fileDB expiry/overflow. My intention is to evict old items once the diskMap.db file grows beyond 500 MB on disk, but this doesn't seem to be working: the diskMap.db file keeps growing. Any suggestions?

    val fileDB = DBMaker.fileDB("diskMap.db")
        .fileMmapEnable()
        .make()

    val diskMap = fileDB
        .hashMap("diskMap", Serializer.STRING, Serializer.BYTE_ARRAY)
        .expireStoreSize(500 * 1024 * 1024)
        .createOrOpen()
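
For what it's worth, the workaround that eventually surfaced further down in this thread is to enable an explicit expiration trigger in addition to expireStoreSize(). A minimal, untested sketch (in Java, reusing the file name, map name and 500 MB limit from the snippet above; the extra expireAfterCreate() call is the only change):

    // imports: org.mapdb.DB, org.mapdb.DBMaker, org.mapdb.HTreeMap, org.mapdb.Serializer
    DB fileDB = DBMaker.fileDB("diskMap.db")
            .fileMmapEnable()
            .make();

    HTreeMap<String, byte[]> diskMap = fileDB
            .hashMap("diskMap", Serializer.STRING, Serializer.BYTE_ARRAY)
            .expireStoreSize(500 * 1024 * 1024)  // target upper bound for the store, ~500 MB
            .expireAfterCreate()                 // queue new entries for eviction on insert
            .createOrOpen();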
sunyaf commented 5 years ago

Hello, did you also run into this problem?

    java.lang.NoSuchMethodError: org.mapdb.elsa.ElsaSerializerPojo.<init>(I[Ljava/lang/Object;Ljava/util/Map;Ljava/util/Map;Ljava/util/Map;Lorg/mapdb/elsa/ElsaClassCallback;Lorg/mapdb/elsa/ElsaClassInfoResolver;)V

pY4x3g commented 4 years ago

I am facing the same problem: as far as I can tell, expireStoreSize and expireMaxSize are not working for me, for either the disk or the memory DB. I don't know what I am doing wrong. Here is the code, which should terminate, but it keeps running until heap space is exhausted...

        DB dbDisk = DBMaker
                .fileDB("mapdbfile")
                .fileMmapEnableIfSupported()
                .make();

        DB dbMemory = DBMaker
                .memoryDB()
                .make();

        HTreeMap onDisk = dbDisk
                .hashMap("onDisk")
                .expireStoreSize(50 * 1024 * 1024) // 50 MB
                .expireMaxSize(100)
                .expireExecutor(Executors.newScheduledThreadPool(2))
                .createOrOpen();

        HTreeMap inMemory = dbMemory
                .hashMap("inMemory")
                .expireStoreSize(10 * 1024 * 1024)
                .expireMaxSize(10)
                .expireOverflow(onDisk)
                .expireExecutor(Executors.newScheduledThreadPool(2))
                .create();

        inMemory.put("test", 2);

        System.out.println("" + inMemory.get("test"));

        int i = 0;
        while (onDisk.get("test") == null) {
            i++;
            System.out.println("not found... " + i + " " + inMemory.getSize() + " " + onDisk.size());
            byte[] bytes = new byte[1 * 1024 * 1024];
            new Random().nextBytes(bytes);
            inMemory.put(i, bytes);
            Thread.sleep(10000);
            System.out.println(" " + inMemory.getExpireMaxSize());
        }

        inMemory.close();
        onDisk.clear();
        onDisk.close();
pY4x3g commented 4 years ago

I found the solution. I assumed eviction is always triggered after a put operation, but you have to activate the eviction process explicitly by calling expireAfterCreate() (the variant without a long TTL argument), which enables eviction checks after each put. Used together with expireExecutor, the eviction is then carried out by separate threads. I would like to trigger eviction myself, but unfortunately the expireEvict() method is not working for me; for now, expireAfterCreate() is sufficient. It would be great if this were covered in the documentation, since I had to dig through the code to understand the eviction process.
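
To put that workaround into code, here is a minimal, untested sketch of the repro configuration with the eviction trigger added (the file name, map names, size limits and the scheduled thread pool are simply the values from the snippet above):

    // imports: org.mapdb.DB, org.mapdb.DBMaker, org.mapdb.HTreeMap, java.util.concurrent.Executors
    DB dbDisk = DBMaker
            .fileDB("mapdbfile")
            .fileMmapEnableIfSupported()
            .make();

    DB dbMemory = DBMaker
            .memoryDB()
            .make();

    // on-disk map: size-capped store, entries queued for eviction when inserted
    HTreeMap onDisk = dbDisk
            .hashMap("onDisk")
            .expireStoreSize(50 * 1024 * 1024)   // 50 MB
            .expireAfterCreate()
            .expireExecutor(Executors.newScheduledThreadPool(2))
            .createOrOpen();

    // in-memory map: evicted entries overflow into the on-disk map
    HTreeMap inMemory = dbMemory
            .hashMap("inMemory")
            .expireStoreSize(10 * 1024 * 1024)   // 10 MB
            .expireMaxSize(10)
            .expireAfterCreate()                 // without a trigger, nothing is ever queued for eviction
            .expireOverflow(onDisk)
            .expireExecutor(Executors.newScheduledThreadPool(2))
            .create();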