danpopHP / logstash

Automatically exported from code.google.com/p/logstash

OOM issue in logstash with tcp input #10

Open GoogleCodeExporter opened 8 years ago

GoogleCodeExporter commented 8 years ago
What steps will reproduce the problem?
1. Create a logstash server with four TCP inputs and a MongoDB output.
2. Run it for a while.
3. We see an OOM error on the system.

I, [2011-01-13T21:13:14.210438 #13633]  INFO -- logstash: Starting tcp
listener for tcp://0.0.0.0:5565/
I, [2011-01-13T21:13:14.211069 #13633]  INFO -- logstash: Starting tcp
listener for tcp://0.0.0.0:5578/
I, [2011-01-13T21:13:14.211703 #13633]  INFO -- logstash: Starting tcp
listener for tcp://0.0.0.0:5568/
I, [2011-01-13T21:13:14.212318 #13633]  INFO -- logstash: Starting tcp
listener for tcp://0.0.0.0:5569/
tcmalloc: large alloc 1499643904 bytes == (nil) @
tcmalloc: large alloc 2699038720 bytes == (nil) @
tcmalloc: large alloc 4857946112 bytes == (nil) @
tcmalloc: large alloc 8743981056 bytes == (nil) @
tcmalloc: large alloc 15738843136 bytes == (nil) @
tcmalloc: large alloc 28329598976 bytes == (nil) @
tcmalloc: large alloc 50992955392 bytes == (nil) @
What is the expected output? What do you see instead?

What version of the product are you using? On what operating system?
Centos 5.4 x86_64

Please provide any additional information below.

Original issue reported on code.google.com by goafter1...@gmail.com on 18 Jan 2011 at 2:40

GoogleCodeExporter commented 8 years ago
Sorry you're having issues. I can try to reproduce this, but there's not a lot 
of data here to go on.

How much data were you sending and how long did it take?
Can you share your logstash config?

Original comment by jls.semi...@gmail.com on 19 Jan 2011 at 8:49

GoogleCodeExporter commented 8 years ago
Below is my logstash config:
inputs:
  m_production:
  - tcp://0.0.0.0:5565/
  tp_staging: # each input must have a type; the type can be anything.
  - tcp://0.0.0.0:5579/
  tm_staging:
  - tcp://0.0.0.0:5578/
  tm_production:
  - tcp://0.0.0.0:5568/
  tp_production:
  - tcp://0.0.0.0:5569/

outputs:
  #- elasticsearch://localhost:9200/logstash/logs
  - mongodb://localhost/dblog

We send about 1.5 GB of log data across these five TCP pipes and store it
in MongoDB. The OOM issue appears after about a day of running.
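
For reference, a comparable load can be generated with a short Ruby script like
the one below. This is a minimal sketch, not the reporter's actual client; the
host, port, and line format are placeholders assuming newline-delimited log
lines pushed to one of the TCP inputs.

    require "socket"
    require "time"

    # Minimal load-generator sketch: stream newline-delimited log lines to one
    # of the logstash TCP inputs. Host, port, and message format are placeholders.
    sock = TCPSocket.new("localhost", 5565)
    1_000_000.times do |i|
      sock.puts("#{Time.now.iso8601} example log line #{i}")
    end
    sock.close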

Original comment by goafter1...@gmail.com on 20 Jan 2011 at 2:06

GoogleCodeExporter commented 8 years ago
Thanks for the details! I'll try to reproduce :)

Original comment by jls.semi...@gmail.com on 20 Jan 2011 at 3:30

GoogleCodeExporter commented 8 years ago
Set target for 1.0, assuming we can reproduce or figure out why this is 
happening.

Original comment by jls.semi...@gmail.com on 9 Feb 2011 at 2:21

GoogleCodeExporter commented 8 years ago
Can you try something? Run logstash for a few hours (before you know you're
going to hit OOM), then send it SIGUSR1 and attach the full output here?

It should look something like this:

I, [2011-02-08T18:21:48.784445 #29863]  INFO -- logstash: Dumping counts of 
objects by class
...
I, [2011-02-08T18:21:48.791249 #29863]  INFO -- logstash: Class: [750] Float
I, [2011-02-08T18:21:48.791296 #29863]  INFO -- logstash: Class: [765] Hash
I, [2011-02-08T18:21:48.791344 #29863]  INFO -- logstash: Class: [1137] 
MatchData
I, [2011-02-08T18:21:48.791390 #29863]  INFO -- logstash: Class: [1204] Proc
I, [2011-02-08T18:21:48.791438 #29863]  INFO -- logstash: Class: [14988] Array
I, [2011-02-08T18:21:48.791484 #29863]  INFO -- logstash: Class: [49622] String
...
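
For context, a dump like this can be produced in plain Ruby along these lines.
This is a minimal sketch using Signal.trap and ObjectSpace, not necessarily
logstash's exact implementation:

    # Sketch: count live objects per class and print them, sorted ascending,
    # when the process receives SIGUSR1.
    Signal.trap("USR1") do
      counts = Hash.new(0)
      ObjectSpace.each_object { |obj| counts[obj.class] += 1 }
      puts "Dumping counts of objects by class"
      counts.sort_by { |_klass, count| count }.each do |klass, count|
        puts "Class: [#{count}] #{klass}"
      end
    end

    # From a shell: kill -USR1 <pid>

If a class count (for example Array or String) keeps climbing between dumps,
that is a good hint about where memory is accumulating.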

Original comment by jls.semi...@gmail.com on 9 Feb 2011 at 2:23

GoogleCodeExporter commented 8 years ago

Original comment by jls.semi...@gmail.com on 9 Feb 2011 at 2:25