chris-rock / bunyan-logstash-tcp

logstash tcp module for bunyan

Log stream memory heap #17

Open shaikatzir opened 9 years ago

shaikatzir commented 9 years ago

Sometimes my logstash server crashes, preventing the bunyan stream from sending logs for a few hours. It seems that every time this happens, the internal memory usage fills up quickly. My guess is that bunyan keeps the logs queued up before sending them to the server, but I couldn't find any documentation on this. Is there any way to configure or check the size of the log queue?

libreninja commented 9 years ago

I'm also running into this issue. I also noticed that `npm install` complains about the npm version, since we upgraded to npm 2.x.

chris-rock commented 9 years ago

Does the bunyan client crash as well? You could try to increase the buffer size: https://github.com/chris-rock/bunyan-logstash-tcp/blob/master/lib/logstash.js#L54
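
A minimal sketch of passing a larger buffer size when creating the stream. The option name `cbuffer_size` is an assumption based on the linked line in lib/logstash.js; verify it against the version you have installed.

```js
// Sketch: create the logstash TCP stream with a larger internal ring buffer.
// NOTE: `cbuffer_size` is assumed from lib/logstash.js#L54; check your version.
var bunyan = require('bunyan');
var bunyantcp = require('bunyan-logstash-tcp');

var log = bunyan.createLogger({
  name: 'my-app',
  streams: [{
    level: 'debug',
    type: 'raw',
    stream: bunyantcp.createStream({
      host: '127.0.0.1',
      port: 5000,
      cbuffer_size: 100          // holds more logs while logstash is unreachable
    }).on('error', console.error) // avoid an unhandled 'error' crashing the process
  }]
});

log.info('hello from bunyan-logstash-tcp');
```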

chris-rock commented 9 years ago

@libreninja I relaxed the npm dependency in the latest master release

shaikatzir commented 9 years ago

I am not sure whether the bunyan client crashes. The question is why the memory keeps growing without any limit. Is the buffer cyclic? Does it delete old messages?

chris-rock commented 9 years ago

We use a fixed-size cyclic buffer, so the memory should not increase due to the buffer itself. See https://github.com/trevnorris/cbuffer
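
A small sketch of the fixed-size behaviour of trevnorris/cbuffer: once the buffer is full, `push()` overwrites the oldest entries, so the buffered backlog (and its memory) is bounded by the chosen capacity.

```js
// Sketch using the CBuffer package (trevnorris/cbuffer).
var CBuffer = require('CBuffer');

var buf = new CBuffer(3);            // capacity of 3 entries
buf.push('log 1', 'log 2', 'log 3');
buf.push('log 4');                   // overwrites the oldest entry ('log 1')

console.log(buf.toArray());          // [ 'log 2', 'log 3', 'log 4' ]
console.log(buf.size);               // 3 -- never grows past the capacity
```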

freddi301 commented 7 years ago

Hi, I recently experienced very slow responses from Node because something was clogging the event loop or the connections. I disabled bunyan-logstash-tcp and now it is fine. I think the problem could be: if the TCP stream cannot connect to the logstash instance (e.g. the logstash server is down), it somehow ties up Node resources, and ordinary requests to Node take longer than 60 seconds (our proxy timeout per request).

freddi301 commented 7 years ago

Hi, sorry, my mistake; it was an unrelated issue (Mongo connection pool duplication).

konstantinkrassmann commented 7 years ago

We had the same issue. At 16:00, Elasticsearch crashed and our clustered application wasn't available anymore.


At the very least, we should be able to catch those dropped logs and write them to a file or something.
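
A sketch of a possible workaround (not a feature of this module): run a local file stream alongside the TCP stream and handle `error` on the TCP stream, so a dead logstash endpoint neither crashes the app nor loses all logs. The hostname and file path below are placeholders.

```js
// Sketch: local fallback stream next to the logstash TCP stream.
var bunyan = require('bunyan');
var bunyantcp = require('bunyan-logstash-tcp');

var tcpStream = bunyantcp.createStream({
  host: 'logstash.internal',   // placeholder host
  port: 5000
}).on('error', function (err) {
  // report the transport failure somewhere that does not depend on logstash
  console.error('logstash stream error:', err.message);
});

var log = bunyan.createLogger({
  name: 'my-app',
  streams: [
    { level: 'info', type: 'raw', stream: tcpStream },
    { level: 'info', path: '/var/log/my-app.log' }   // local copy of every log line
  ]
});
```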