The indexer never kills itself; that would just be funny. Sounds like the Linux OOM killer to me. https://dev.to/rrampage/surviving-the-linux-oom-killer-2ki9 gets the top hits in the search engine I am using. If you want to be sure who killed the process, there are some options here: https://stackoverflow.com/questions/726690/what-killed-my-process-and-why, but it seems DTrace, SystemTap, or the like would get the answer right away.
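A quicker check (a rough sketch, assuming a stock Ubuntu 18.04 box): the kernel logs OOM kills explicitly, so grepping the kernel log should tell you whether the indexer was the victim:

# search the kernel ring buffer for OOM-killer activity
dmesg -T | grep -i -E 'out of memory|oom-killer|killed process'

# or query the kernel messages kept in systemd's journal
journalctl -k | grep -i -E 'out of memory|oom-killer|killed process'

If the OOM killer acted, you should see a line naming the java process and its PID.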
Thanks. I knew OOM killer might be involved, but didn't know how to trace it. I see now that that's what was happening. Thank you, we can close this. Cause found. 👍
Possibly related to #2798. With many gigabytes across 12 repositories in my OpenGrok source dir, I have recently been seeing the indexing job run for about 45 minutes and then die.
Console shows:
Note: there was a gap of about 15 minutes between the last timestamped log line and the "Killed" message, after which the process exited. I am reporting this now because, while trying to investigate, I pruned my source tree down to 6 repositories/projects. Now the output is much different, suggesting the problem will not occur:
Is the issue simply that I have too many repositories? Is it disk allocation, or memory?
I have 16GB in this system and am running with -Xms4g -Xmx14g. I have at points in the past gotten errors about running out of memory, but am not seeing any of those recently. Is there any way to tell what is killing these jobs? (I'm running on a virtual x86_64 system, Ubuntu 18.04.)
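In case it helps with reproducing this, a rough way to watch the indexer's memory use during a run (assuming the indexer is the only java process on the box) would be something like:

watch -n 30 'ps -o pid,rss,vsz,etime,cmd -C java'

That should at least show whether the resident set is creeping toward the 16GB limit before the job dies.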