Closed: @drerik closed this issue 6 years ago.
So. Does this mean that lib-cache is used incorrectly (never cleaned)? Or is this just natural behaviour, and the memory will be reclaimed by the OS?
Let's verify this by creating a test setup; it should be quite easy to check.
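A minimal sketch of what such a test could look like, assuming lib-cache behaves like a size-bounded in-memory cache (the `LinkedHashMap` stand-in here is hypothetical, not Enonic's actual implementation): push direct buffers through the cache so most of them get evicted, then check whether the JVM's "direct" buffer pool actually shrinks.

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;
import java.util.LinkedHashMap;
import java.util.Map;

public class LibCacheLeakTest {

    public static void main(String[] args) {
        // Stand-in for lib-cache: a size-bounded LRU map (hypothetical, test only).
        Map<Integer, ByteBuffer> cache = new LinkedHashMap<Integer, ByteBuffer>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, ByteBuffer> eldest) {
                return size() > 100; // keep at most 100 entries
            }
        };

        BufferPoolMXBean direct = ManagementFactory
                .getPlatformMXBeans(BufferPoolMXBean.class).stream()
                .filter(b -> "direct".equals(b.getName()))
                .findFirst()
                .orElseThrow(IllegalStateException::new);

        // Push 1000 x 1 MB direct buffers through the cache; ~900 get evicted.
        for (int i = 0; i < 1000; i++) {
            cache.put(i, ByteBuffer.allocateDirect(1024 * 1024));
        }

        // Native memory behind an evicted DirectByteBuffer is only freed once GC
        // collects the buffer object itself, so give GC a nudge before measuring.
        System.gc();
        System.out.printf("direct pool: %d buffers, %d MB used%n",
                direct.getCount(), direct.getMemoryUsed() >> 20);
        // Healthy result: ~100 buffers. If it stays near 1000, something still
        // references the evicted entries and the native memory cannot be reclaimed.
    }
}
```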
@drerik says it's critical, so I'm adding it to the sprint and assigning to @runarmyklebust
I cannot find any indication that this is related to lib-cache, neither by code review nor by testing with lots of data. There may of course be something I don't manage to replicate, but I think the problem lies elsewhere.
Makes sense, we'll have to profile this to get better insight then!
@runarmyklebust should we close this if it's not lib-cache that is the issue?
Yes
We have a lot of installations that eat up native memory while heap memory usage is not increasing. This makes the OS kill the JVM on memory allocation, or prevents other processes that allocate memory from starting.
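To confirm from inside the JVM that the growth is off-heap, a probe along these lines can be run periodically (a sketch only, not part of the affected installations); its numbers can be cross-checked against what `-XX:NativeMemoryTracking=summary` plus `jcmd <pid> VM.native_memory summary` report.

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class MemoryProbe {

    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        while (true) {
            long heap = memory.getHeapMemoryUsage().getUsed();
            long nonHeap = memory.getNonHeapMemoryUsage().getUsed();
            long direct = ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class).stream()
                    .filter(b -> "direct".equals(b.getName()))
                    .mapToLong(BufferPoolMXBean::getMemoryUsed)
                    .sum();
            // If process RSS grows while all three of these stay flat, the leak
            // is in memory the JVM does not track here (e.g. JNI or malloc arenas).
            System.out.printf("heap=%dMB nonHeap=%dMB directBuffers=%dMB%n",
                    heap >> 20, nonHeap >> 20, direct >> 20);
            Thread.sleep(60_000);
        }
    }
}
```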
For all of customer1's installations, the problem seems to be related to traffic, as their test server does not see the same issue. From what I know, they are using lib-cache to speed up page generation/viewing.
But on the customer2-prod and customer2-test installations we see that the memory is increasing every hour, on the hour. This also happens on the test environment, which does not get any visitor traffic. According to the partner that wrote the code for customer2, they import data every hour on the hour and store it in a cache object for 24 hours.
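If that description is accurate, the allocation pattern would look roughly like the hypothetical sketch below (the class name, the 64 MB buffer size, and the use of direct buffers are all assumptions): one large native allocation per hourly run, retained for 24 hours. That would give a stepwise hourly climb toward roughly 24 × 64 MB ≈ 1.5 GB of native memory while the heap stays flat, which matches the symptom.

```java
import java.nio.ByteBuffer;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HourlyImportCache {

    private static final long TTL_MS = TimeUnit.HOURS.toMillis(24);

    private final Map<Long, ByteBuffer> cache = new ConcurrentHashMap<>();

    public void start() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            long now = System.currentTimeMillis();
            // Hypothetical import: one large native allocation per hourly run.
            cache.put(now, ByteBuffer.allocateDirect(64 * 1024 * 1024));
            // Drop entries older than 24h. Note: removing the map entry frees the
            // native memory only after GC collects the DirectByteBuffer itself.
            cache.keySet().removeIf(ts -> now - ts > TTL_MS);
        }, 0, 1, TimeUnit.HOURS);
    }

    public static void main(String[] args) {
        new HourlyImportCache().start();
    }
}
```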
One thing I have learned is that java.nio.ByteBuffer.allocateDirect() puts buffers outside of the normal heap space. From https://docs.oracle.com/javase/8/docs/api/java/nio/ByteBuffer.html :

> The contents of direct buffers may reside outside of the normal garbage-collected heap, and so their impact upon the memory footprint of an application might not be obvious.
From a Java heap dump I also found references to ByteBuffer[].
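That fits with how direct buffers show up in heap dumps: the `DirectByteBuffer` wrapper (and any `ByteBuffer[]` holding such wrappers) is tiny on the heap, while the actual payload is native. A small demonstration, using a hypothetical 256 MB buffer:

```java
import java.nio.ByteBuffer;

public class DirectBufferFootprint {

    public static void main(String[] args) {
        // 256 MB of native memory; the wrapper object itself is tiny on the heap.
        ByteBuffer buffer = ByteBuffer.allocateDirect(256 * 1024 * 1024);

        Runtime rt = Runtime.getRuntime();
        System.out.printf("buffer capacity=%dMB, heap used=%dMB%n",
                buffer.capacity() >> 20,
                (rt.totalMemory() - rt.freeMemory()) >> 20);
        // A heap dump shows only the small DirectByteBuffer (or a ByteBuffer[]
        // of such wrappers); the 256 MB lives outside the heap, so heap graphs
        // stay flat while the process RSS grows. The off-heap total is capped by
        // -XX:MaxDirectMemorySize, which defaults to roughly the max heap size.
    }
}
```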
Enonic XP log from when the cron job runs:
The code that runs this task can be provided on request.
Memory usage on the server and in the JVM:
Snapshot of Grafana data: https://metrics.enonic.io/dashboard/snapshot/BVkDQgqBsotba6TU5VdTpv1F1VP8YmIE