Closed: this issue was closed by mzubal 8 years ago.
Could you check out how the total thread count is over time?
It is visible in the bottom right corner on both screenshots: thread count remains practically constant over time.
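The thread counts above were read off monitoring screenshots. As an illustrative alternative (not what the reporters used), a flat-vs-climbing thread count can also be sampled from inside a JVM via the standard `ThreadMXBean`; the class name here is hypothetical:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCountProbe {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // Sample the live thread count a few times. A thread leak would
        // show this number climbing; in this issue it stays practically
        // constant, which is what points the blame at native memory instead.
        for (int i = 0; i < 3; i++) {
            System.out.println("live threads: " + threads.getThreadCount());
            Thread.sleep(100);
        }
    }
}
```

In a real Logstash process you would attach an external tool (VisualVM, `jstack`) rather than embed code, but the metric observed is the same one `getThreadCount()` reports.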
I'm seeing the same behavior on Windows Server 2008 R2 with Logstash 1.5.4 and JRE 1.8.0_60, using a file input and a Kafka output. I'm going to try downgrading to Logstash 1.5.0 and see if that fixes things.
EDIT: it's better, but I'm still seeing an inexorable climb in memory usage as reported by Perfmon.
This problem has been identified as a leak in JRuby >= 1.7.20, which Logstash started shipping with in 1.5.1. More info here: https://github.com/jruby/jruby/issues/3446
Thanks to all of you for digging this out. Are there any plans to fix this in 1.x, or only in 2.1?
@mzubal this will be backported to 1.5
Fixed via https://github.com/jruby/jruby/issues/3446; the fix ships in Logstash 1.5.6 and 2.1.0.
Hi, there have been several other bug reports related to Logstash taking too much memory, and I have encountered similar issues as well. Environment:
Running 1.5.1 and higher (I have tried all of them) causes the Java process to slowly consume more memory (as can be seen in the screenshot below), eventually ending up with GBs of memory consumed. The Java heap is fine, as the screenshots show, so I suspect native memory usage is the problem here. I have tried removing all filters and the elasticsearch output (keeping just the inputs and the stdout output) with the same result. On 1.5.0 this doesn't happen (as can be seen in the screenshot below). This leads me to the conclusion that something changed in 1.5.1 to cause this behavior. I would be very glad if you checked it. Thanks!
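The "heap is fine but the process keeps growing" symptom can be illustrated with the standard `MemoryMXBean`. This sketch (class name hypothetical) shows the JVM-side numbers one can query, and why they would not reveal this particular leak:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryProbe {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
        System.out.println("heap used (MB): " + heap.getUsed() / (1024 * 1024));
        System.out.println("non-heap used (MB): " + nonHeap.getUsed() / (1024 * 1024));
        // Note: getNonHeapMemoryUsage() only covers JVM-managed areas
        // (metaspace, code cache). Memory allocated by native code --
        // the suspected culprit in this issue -- is invisible to these
        // beans and must be measured at the OS level (process RSS via
        // Perfmon, top, etc.) or with JVM Native Memory Tracking.
    }
}
```

This is why the heap screenshots look healthy while Perfmon still shows the process climbing.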
[screenshots: process memory usage over time, logstash-1.5.0 vs logstash-1.5.3]
logstash config:
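The reporter's actual config was attached to the issue and is not reproduced here. Purely as an illustration, a minimal config of the shape described above (inputs kept, all filters removed, stdout output) might look like the following; the file path is a placeholder, not taken from the report:

```
# Hypothetical minimal reproduction config -- not the reporter's actual file.
input {
  file {
    path => "/var/log/example.log"   # placeholder path
  }
}
# All filters removed, per the reproduction steps above.
output {
  stdout { }
}
```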