Closed hkclark closed 4 years ago
@hkclark I wonder if the problem is in `graygelf` or in whatever is running on that Graylog server. Can you try temporarily shipping your logs to Sematext to see if you still hit this problem?
@oba11 Thanks for the reply. Yes, `graygelf` is being used... I'm installing it into the standard Logagent Docker container with npm. Is the suggestion to send to Sematext using the default:

```yaml
output:
  logsene:
    module: elasticsearch
    url: $LOGSENE_RECEIVER_URL
    ...
```
Or to use the GELF output module to send to Sematext (if so, I didn't know that was an option)? However, if I switch to logsene/elasticsearch I won't hit this issue -- it only occurs when I'm using the `output-gelf` plugin.
I think you can ship GELF logs to the "socket receiver" for JSON listed at https://sematext.com/docs/logs/sending-log-events/
@otisg I just submitted a pull request that fixes it for me. I think the `graygelf` object should be created once in the constructor instead of creating a new one every time `eventHandler` fires. As I mentioned in the PR, let me know if I can do anything to assist. Thanks!
See PR #222
Closing this as it was fixed in #222.
@hkclark Hey! I've released 3.0.32 with this fix. Feel free to try it out!
Thanks! I also saw the update on Docker Hub and we are pulling that image. Thanks again for all the help!
Awesome! Pleasure was all mine.
We have been trying to use the `output-gelf` plugin and have noticed that after 1-2 hours we get these exceptions and Logagent stops working:

We are running this in the standard Docker Hub container (`sematext/logagent:latest`), where we have a second stage in the build process that runs `npm install -g --unsafe-perm graygelf` to satisfy the requirement for `graygelf`, but otherwise it's the standard container. I have noticed that if I `docker-compose exec` into the container and watch `netstat -tupan`, the number of UDP ports in use steadily grows until the exceptions start happening.