Open Nagesh17 opened 3 years ago
Quick thought: could it be that there are no memory limits and the JVM just assumes the whole node's resources? Remember that if you are using Java 8 you'll also need to set limits on the JVM, since it does not detect the ones from k8s.
It would help if you could provide an independent reproducer; the one provided is missing methods and seems to rely on Spring.
The soft references should be removed on a GC, no? Another thing: please reuse the Asciidoctor instance instead of creating a new one on each iteration. Asciidoctor and Asciidoctor-PDF are thread-safe, and initialisation of these components takes a relatively long time. So unless you want to render with different extensions in place, please use only one Asciidoctor instance.
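On the soft-reference point: the JVM guarantees that soft references are cleared before an `OutOfMemoryError` is thrown, but a routine GC normally leaves them intact while memory is plentiful. A minimal JDK-only sketch (class and method names are illustrative):

```java
import java.lang.ref.SoftReference;

public class SoftRefDemo {

    // Returns true if the soft reference survives an explicit GC cycle.
    static boolean softSurvivesGc() {
        SoftReference<byte[]> soft = new SoftReference<>(new byte[1024]);
        System.gc();
        return soft.get() != null;
    }

    public static void main(String[] args) {
        // Soft references are only cleared under memory pressure, so a
        // routine GC with a mostly empty heap leaves them in place.
        System.out.println("soft reference survived GC: " + softSurvivesGc());
    }
}
```

So live soft references in a heap dump are not by themselves evidence of a leak: they would be released before the heap could actually fill up.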
Also, I wouldn't say that the screenshots show anything extraordinary: there is only one Ruby instance, which means that previous instances were cleaned up.
> Quick thought: could it be that there are no memory limits and the JVM just assumes the whole node's resources?

Yes, very good point. Please run `kubectl describe pod ...`. It should show whether the container was OOM-killed. If it was, please make sure to limit the maximum memory size accordingly (not just the heap size). At least Java 11 has the option `-XX:+UseContainerSupport` to do that automatically.
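For reference, a quick way to check whether the container was OOM-killed (pod and namespace names are placeholders):

```shell
# An OOM-killed container shows "Reason: OOMKilled" under "Last State"
kubectl describe pod <pod-name> -n <namespace>

# Or query the last termination reason directly via jsonpath
kubectl get pod <pod-name> -n <namespace> \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
```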
> Quick thought: could it be that there are no memory limits and the JVM just assumes the whole node's resources? Remember that if you are using Java 8 you'll also need to set limits on the JVM, since it does not detect the ones from k8s.
>
> It would help if you could provide an independent reproducer; the one provided is missing methods and seems to rely on Spring.

@abelsromero We have already set the container resource limit to 2 GB using the `resources` section in `deployment.yaml` (under `spec: template: spec: containers:`). Also, the JVM options `-Xmx` and `-Xms` are set to 2 GB.
Sure, but you also need to make sure that the JVM will not request more than 2GB, otherwise it will be OOM killed.
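For illustration, a limit that leaves headroom for non-heap memory might look like the sketch below (all names and sizes are placeholders, not taken from the report):

```yaml
spec:
  template:
    spec:
      containers:
        - name: pdf-generator            # placeholder
          image: example/pdf-generator   # placeholder
          resources:
            limits:
              memory: "2Gi"
          env:
            - name: JAVA_TOOL_OPTIONS
              # Keep the heap below the container limit: metaspace, thread
              # stacks and JRuby's native allocations also count against 2Gi.
              value: "-Xmx1536m"
```

Setting `-Xmx` equal to the container limit leaves the JVM's non-heap memory nowhere to go, which is a common cause of OOM kills.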
> Another thing: please reuse the Asciidoctor instance instead of creating a new one on each iteration.

@robertpanzer I have already tried this: I created a single Asciidoctor instance in the constructor and reused it on each run of the pdf-generation thread. No improvement was seen.
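For context, the single-instance pattern looks roughly like this sketch, assuming the standard AsciidoctorJ API (class and field names are illustrative; `Options.builder()` is available in newer AsciidoctorJ versions, and the `pdf` backend requires the `asciidoctorj-pdf` dependency):

```java
import org.asciidoctor.Asciidoctor;
import org.asciidoctor.Options;
import org.asciidoctor.SafeMode;

import java.io.File;

public class PdfGenerator {

    // One shared, thread-safe Asciidoctor instance for the whole service;
    // creating it is expensive because it boots an embedded JRuby runtime.
    private final Asciidoctor asciidoctor = Asciidoctor.Factory.create();

    public void convertToPdf(File adocFile) {
        asciidoctor.convertFile(adocFile, Options.builder()
                .backend("pdf")       // assumes asciidoctorj-pdf on the classpath
                .safe(SafeMode.UNSAFE)
                .build());
    }
}
```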
> Also, I wouldn't say that the screenshots show anything extraordinary: there is only one Ruby instance, which means that previous instances were cleaned up.

@robertpanzer This heap dump was captured after the pdf-generator thread had finished its execution and no thread was actively running.
> Sure, but you also need to make sure that the JVM will not request more than 2GB, otherwise it will be OOM killed.

I have done that; I edited the comment above.
And why is the container killed? If there were a memory leak, there should be a large number of no-longer-required objects on the heap, but I don't see that in your heap dump: there is only one instance of `org.jruby.Ruby` and one instance of `ThreadContext`.

Please provide the output of `kubectl describe pod`.
Application: Spring Boot microservice
Deployment env: Docker container running on OpenStack Kubernetes
Heap size: 2 GB

We are using the asciidoctorj library to convert asciidoc files to PDF files using the logic below. A scheduler thread runs every 5 minutes.
The heap memory usage keeps increasing after each run of the pdf-generation thread. Heap dumps of the application show many live JRuby-related objects; I have attached one heap-dump snapshot from JProfiler. When the heap is full, the application fails and the Docker container is automatically restarted.
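Since the container is restarted on failure, in-memory state is lost; the JVM's standard diagnostic flags can capture a heap dump at the moment of failure (a sketch; the dump path is a placeholder and must point at a persistent volume to survive the restart):

```shell
java -Xms2g -Xmx2g \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/dumps/heapdump.hprof \
     -jar app.jar
```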
Sample code snippet:

```java
@Scheduled(fixedDelay = 300000, initialDelay = 25000)
public void generatePDFSchedule() {
    // if there are merge events
```