splunk / docker-logging-plugin

Splunk Connect for Docker is a Docker logging plugin that allows Docker containers to send their logs directly to Splunk Enterprise or a Splunk Cloud deployment.
Apache License 2.0

logging plugin consumes a large amount of memory #50

Closed lybroman closed 5 years ago

lybroman commented 5 years ago

What happened: the logging plugin consumes a large amount of memory.

Mem: 5498852K used, 2148392K free, 166208K shrd, 50612K buff, 938332K cached
CPU:  63% usr  10% sys   0% nic  26% idle   0% io   0% irq   0% sirq
Load average: 1.48 1.62 1.66 2/507 8054
  PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
 3476  3459 root     S    2575m  34%   1  53% /bin/splunk-logging-plugin

What you expected to happen: the logging plugin should consume an acceptable amount of memory.

How to reproduce it (as minimally and precisely as possible): It happened after the plugin had been enabled for some time. We have several containers using this plugin to forward their logs to a Splunk instance.

Environment:

mhoogcarspel-splunk commented 5 years ago

I think this is the same as, or similar to, issue ADDON-21861 that I reported internally at Splunk (with some more specific steps to reproduce it faster). I'm just leaving a note here so I get updates on this thread as well; I also left a note on the internal issue linking back to here.

dbaldwin-splunk commented 5 years ago

@gp510 Tagging you since you are also tracking this.

gp510 commented 5 years ago

The fix for this issue has been merged into the develop branch: https://github.com/splunk/docker-logging-plugin/pull/53

The leak was caused by calling time.NewTicker inside a fast-iterating loop.
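For context, here is a minimal sketch of the pattern described above (the function and channel names are illustrative, not the plugin's actual code): calling time.NewTicker inside a select loop allocates a fresh runtime timer on every iteration, and because none of those tickers are ever stopped, memory grows without bound under heavy log traffic. The usual remedy, and presumably what PR #53 does, is to create the ticker once before the loop and stop it when the loop exits.

```go
package main

import "time"

// leakyLoop shows the problematic pattern (simplified, hypothetical names):
// a new Ticker is allocated on every iteration of a hot loop. Each un-stopped
// Ticker keeps a runtime timer alive, so memory usage climbs steadily.
func leakyLoop(events <-chan string, done <-chan struct{}) {
	for {
		select {
		case <-time.NewTicker(time.Second).C: // allocates a fresh ticker every iteration
			// flush buffered events ...
		case <-events:
			// buffer the event ...
		case <-done:
			return
		}
	}
}

// fixedLoop shows the corrected pattern: one Ticker is created before the
// loop, reused on every iteration, and stopped when the loop exits.
func fixedLoop(events <-chan string, done <-chan struct{}) {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			// flush buffered events ...
		case <-events:
			// buffer the event ...
		case <-done:
			return
		}
	}
}

func main() {}
```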

Before fix:
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
14845 root      20   0  709216 642488   5860 S  93.7 15.9  31:15.22 splunk-logging-

After fix:
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 7570 root      20   0  108240 104824   5540 S   7.9  2.6   6:37.93 splunk-logging-