Expected Behavior
Container should hover at a certain appropriate memory usage level and not keep increasing its memory usage until device RAM is full.
Current Behavior
The container starts leaking memory immediately after startup; after 14 minutes of uptime, memory usage had already reached 8 GB. The process list inside the container shows a python process using about 60 MB of RAM and three s6-* processes with negligible memory usage, so the leak is not rooted in the guest software.
Steps to Reproduce
Start the container in the environment described below.
Watch the container's memory usage grow without bound.
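One quick way to observe the growth is via Docker's live stats stream (the container name `pyload` here is an assumption; substitute your own):

```shell
# Continuously print the container's reported memory usage and percentage.
# Replace "pyload" with the actual container name from `docker ps`.
docker stats pyload --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}"
```

On the affected system the MemUsage column climbs steadily without ever levelling off.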
Environment
Synology NAS, model DS1819+
CPU: Intel Atom C3538
RAM: 32 GB
Docker package v18.09.0-0506 (latest available version), contains Docker daemon v18.09.6
Command used to create docker container (run/create/compose/screenshot)
Environment: PUID/PGID/TZ properly set
Port passthroughs: 8000->8000, 7227->7227, 9666->9666 (you might want to expose the latter by default; it is used for Click'n'Load functionality)
Volume mounts: /config and /downloads properly mapped to shares on the NAS
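Putting the above together, the creation command looked roughly like this. This is a sketch, not the exact command used: the image name, PUID/PGID values, timezone, and host-side share paths are assumptions.

```shell
docker run -d \
  --name=pyload \
  -e PUID=1000 -e PGID=1000 -e TZ=Europe/Berlin \
  -p 8000:8000 -p 7227:7227 -p 9666:9666 \
  -v /volume1/docker/pyload/config:/config \
  -v /volume1/downloads:/downloads \
  linuxserver/pyload
```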
Docker logs
The logs contain a large amount of duplicated output from pyload (pyload already writes its own log file inside the mounted config directory).
How to fix
What appears to work for me is modifying the startup script at /etc/services.d/pyload/run: I redirected STDOUT and STDERR to the null device, on the assumption that Docker collects anything written there and buffers it in memory rather than writing it to a file. With that change, my container's RAM usage hovers between 900 MB and 1.75 GB and does not grow further.
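For illustration, a sketch of what the modified service script might look like. The exec line, the `abc` user, and the pyload path are assumptions based on a typical s6 run script in linuxserver-style images; only the trailing redirection is the actual change:

```shell
#!/usr/bin/with-contenv bash
# /etc/services.d/pyload/run (sketch; the real start command in the
# image may differ -- the fix is only the redirection at the end).
# Discard STDOUT/STDERR so Docker's logging never sees the duplicated
# pyload output; pyload still writes its own log file under /config.
exec s6-setuidgid abc python /app/pyload/pyLoadCore.py > /dev/null 2>&1
```

Note this makes `docker logs` useless for this container, so it is a workaround rather than a proper fix; limiting the json-file log driver's size would be a less drastic alternative if the leak really is log accumulation.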