Closed by danizen 3 years ago
Often, OOM errors can't be recovered from and as such cannot be handled reliably. The JVM application state is already compromised the moment the error is thrown, and killing/restarting the process with more memory is usually the best approach.
Still, if you want to prevent hangs, the best option is likely the JVM hook that runs a kill command or equivalent (from the Oracle JVM documentation):
-XX:OnOutOfMemoryError="<cmd args>; <cmd args>"
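For example, a launch command could use that hook to force the process to die on the first OOM. This is a sketch: the jar name below is a placeholder, not the collector's actual launch command, but `%p` is a real HotSpot substitution that expands to the JVM's own process id.

```shell
# Kill the JVM immediately when an OutOfMemoryError is first thrown.
# %p is replaced by the JVM with its own PID.
java -XX:OnOutOfMemoryError="kill -9 %p" -jar collector-http.jar
```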
As of Java 8u92, you can also use one of these JVM arguments (described here):
-XX:+ExitOnOutOfMemoryError
-XX:+CrashOnOutOfMemoryError
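With Java 8u92 or later, no external command is needed; the JVM can terminate itself. Again a sketch, with a placeholder jar name:

```shell
# Exit the JVM on the first OutOfMemoryError (no hang, no external kill).
java -XX:+ExitOnOutOfMemoryError -jar collector-http.jar

# Alternatively, abort and produce an hs_err crash dump for diagnosis.
java -XX:+CrashOnOutOfMemoryError -jar collector-http.jar
```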
The next major release will require Java 8, so the launch scripts shipped with the collector may be modified to include one of these Java 8 arguments.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
In https://github.com/Norconex/collector-http/issues/477, I diagnosed a serious problem where my crawling job experienced a fatal OutOfMemoryError, and a later attempt to stop the collector failed because the JVM would not exit. It seems likely that the crawler job entered a terminal state, but the code kept waiting for it to stop cleanly even though it had already failed.
The exception that produced this state was:
For me, reducing the Elasticsearch committer's commitSize resolved the problem, but it is still worth preventing crawl job hangs.
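For reference, the commitSize workaround is set in the committer section of the crawler configuration. This is an illustrative sketch only: the node URL, index name, and value of 100 are placeholder assumptions, and the exact surrounding elements may differ between committer versions.

```xml
<!-- Sketch: lowering commitSize reduces the memory held per batch. -->
<committer class="com.norconex.committer.elasticsearch.ElasticsearchCommitter">
  <nodes>http://localhost:9200</nodes>   <!-- placeholder node URL -->
  <indexName>my-index</indexName>        <!-- placeholder index name -->
  <commitSize>100</commitSize>           <!-- smaller batches, less memory -->
</committer>
```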