jankotek / mapdb

MapDB provides concurrent Maps, Sets and Queues backed by disk storage or off-heap-memory. It is a fast and easy to use embedded Java database engine.
https://mapdb.org
Apache License 2.0

HTreeMap + memoryDB seems to be causing mem leak #953

Open puzpuzpuz opened 4 years ago

puzpuzpuz commented 4 years ago

Greetings.

I'm using MapDB 3.0.7 with HTreeMap as an in-memory write-behind cache. It's configured like this:

DB db = DBMaker.memoryDB()
        .make();
ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor(
        r -> new Thread(r, "InMemoryStoreExpiration"));
HTreeMap<String, long[]> store = db.hashMap("test")
        .keySerializer(Serializer.STRING)
        .valueSerializer(Serializer.LONG_ARRAY)
        .expireMaxSize(200_000)
        .expireAfterCreate()
        .expireExecutor(executor)
        .expireExecutorPeriod(1_000)
        .create();

In my load test, entries are upserted with store.compute() calls, at a rate of 50,000 entries per minute.
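
For reference, the upsert loop looks roughly like this (key format and value layout are simplified placeholders, not the real application data):

// Roughly one minute's worth of load: ~50,000 compute() upserts against
// the `store` configured above. Keys and values here are placeholders.
for (int i = 0; i < 50_000; i++) {
    String key = "key-" + (i % 200_000);
    long now = System.currentTimeMillis();
    store.compute(key, (k, v) -> {
        if (v == null) {
            return new long[]{now, 1L};    // first insert: timestamp + counter
        }
        return new long[]{v[0], v[1] + 1}; // update: keep timestamp, bump counter
    });
}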

Configuration: OpenJDK 1.8.0_222-8u222-b10-1ubuntu1~18.04.1-b10 with -Xms128M -Xmx1G args.

After running the load test for about 4 hours, I noticed that the old gen portion of the heap was slowly growing. I collected a heap dump at the end of the test and saw that the underlying byte[][] array was occupying more than 500MB of heap space (early in the test, once all 200K entries were present, it was around 180MB).

Then I tried changing .memoryDB() to .heapDB(), re-ran the test, and that seems to have fixed the problem.
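
The only change was the DBMaker entry point; the rest of the configuration stayed the same, roughly:

// Same hashMap configuration as above, only the backing store changed
// from the serialized byte[]-backed store (memoryDB) to the plain
// object store (heapDB). The executor is the one created earlier.
DB heapDb = DBMaker.heapDB()
        .make();
HTreeMap<String, long[]> heapStore = heapDb.hashMap("test")
        .keySerializer(Serializer.STRING)
        .valueSerializer(Serializer.LONG_ARRAY)
        .expireMaxSize(200_000)
        .expireAfterCreate()
        .expireExecutor(executor)
        .expireExecutorPeriod(1_000)
        .create();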

Is there something in my scenario that could be causing the memory leak in memoryDB mode, or does this sound more like a bug?

icreator commented 4 years ago

Use clearCache() on db.getEngine().
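
Something along these lines (method names as given above; depending on the MapDB version the accessor may differ, e.g. db.getStore() in 3.x):

// Suggested workaround: periodically drop the engine's internal cache.
// getEngine()/clearCache() are the calls named above; on MapDB 3.x the
// store accessor is db.getStore(), so the exact call may need adjusting.
db.getEngine().clearCache();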