Closed: lybroman closed this issue 5 years ago
I think this is the same as, or similar to, issue ADDON-21861 that I reported internally at Splunk (with some more specific steps to reproduce it faster). I'm leaving a note here so I get updates on this thread too, and I've added a note on the internal issue linking back to here.
@gp510, tagging you since you are also tracking this.
The fix for this issue has been merged to the develop branch: https://github.com/splunk/docker-logging-plugin/pull/53
It was caused by calling time.NewTicker inside a fast-iterating loop.
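For context, the problematic shape is roughly the sketch below. This is a minimal, hypothetical Go example assuming a simplified per-message loop; the channel, the send callback, and the flush logic are placeholders and not the plugin's actual code (the real change is in PR #53 above). The only point is where time.NewTicker is called relative to the loop.

```go
// Minimal sketch of the leak pattern, assuming a simplified message loop.
// Names and flush logic are placeholders, not the plugin's actual code.
package main

import "time"

// leakyLoop mimics the reported bug: a new Ticker is allocated on every
// iteration and never stopped. Each unstopped Ticker leaves an active
// runtime timer behind, so a tight loop like this grows memory (and burns
// CPU servicing timers) without bound.
func leakyLoop(messages <-chan string, send func(string)) {
	for msg := range messages {
		ticker := time.NewTicker(time.Second) // allocated per message, never Stop()ed
		select {
		case <-ticker.C:
			// flush on timeout (placeholder)
		default:
			send(msg)
		}
	}
}

// fixedLoop hoists the Ticker out of the loop, reuses it for every
// iteration, and releases it when the loop exits.
func fixedLoop(messages <-chan string, send func(string)) {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for msg := range messages {
		select {
		case <-ticker.C:
			// flush on timeout (placeholder)
		default:
			send(msg)
		}
	}
}

func main() {
	msgs := make(chan string)
	go func() {
		for i := 0; i < 5; i++ {
			msgs <- "event"
		}
		close(msgs)
	}()
	// In the real plugin, send would forward the event to Splunk.
	fixedLoop(msgs, func(string) {})
}
```

Reusing a single ticker (and stopping it when the loop exits) keeps memory flat instead of growing with message volume, which matches the before/after numbers below.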
Before fix:

```
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
14845 root      20   0  709216 642488   5860 S  93.7 15.9  31:15.22 splunk-logging-
```

After fix:

```
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 7570 root      20   0  108240 104824   5540 S   7.9  2.6   6:37.93 splunk-logging-
```
What happened: the logging plugin consumes a large amount of memory.

```
Mem: 5498852K used, 2148392K free, 166208K shrd, 50612K buff, 938332K cached
CPU:  63% usr  10% sys   0% nic  26% idle   0% io   0% irq   0% sirq
Load average: 1.48 1.62 1.66 2/507 8054
  PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
 3476  3459 root     S    2575m  34%   1  53% /bin/splunk-logging-plugin
```
What you expected to happen: the logging plugin should consume an acceptable amount of memory.
How to reproduce it (as minimally and precisely as possible): it happened after the plugin had been enabled for some time. We have several containers using this plugin to forward their logs to a Splunk instance.
Environment:
- Docker version (docker version): 17.06.0-ce
- OS (cat /etc/os-release): Alpine Linux v3.5