lancachenet / monolithic

A monolithic lancache service capable of caching all CDNs in a single instance
https://hub.docker.com/r/lancachenet/monolithic

Avoid logging related disk writes #118

Closed PaeNx closed 3 years ago

PaeNx commented 3 years ago

Problem

Currently the lancachenet/monolithic container causes periodic, very small disk writes (only a few kB), which mostly seem to be related to logging. These disk writes prevent HDD spindown during idle periods, resulting in unnecessary power consumption.

One way to deal with this would be to store the log files on a tmpfs; unfortunately, the container does not expose all log files below the data directory on the host. The only log files accessible from the host are access.log and error.log. However, analyzing the Docker overlay filesystem for the container, I found additional log files in the container's /tmp directory as well as a supervisord.log (see snippet below).

├── tmp
│   ├── heartbeat-stderr---supervisor-XXXXXXXX.log
│   ├── heartbeat-stdout---supervisor-XXXXXXXX.log
│   ├── nginx-stderr---supervisor-XXXXXXXX.log
│   ├── nginx-stdout---supervisor-XXXXXXXX.log
│   └── site.conf
└── var
    ├── lib
    │   └── nginx
    │       ├── body
    │       ├── fastcgi
    │       ├── proxy
    │       ├── scgi
    │       └── uwsgi
    └── log
        ├── nginx
        │   ├── access.log
        │   └── error.log
        └── supervisor
            └── supervisord.log
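
For illustration only, one purely runtime way to apply the tmpfs idea to the locations listed above would be to mount tmpfs over them when starting the container. The host paths and published ports below are placeholders, the /data/cache volume is assumed to be the image's usual cache mount, and the sketch assumes /tmp/site.conf is regenerated by the entrypoint at start (it shows up in the overlay, so it is created at runtime):

```
# Hypothetical sketch: tmpfs mounts over the log locations found in the
# overlay filesystem, so the periodic small writes never reach the HDD.
# Host cache path and published ports are placeholders.
docker run -d --name lancache \
  -v /srv/lancache/cache:/data/cache \
  --tmpfs /tmp \
  --tmpfs /var/log/nginx \
  --tmpfs /var/log/supervisor \
  -p 80:80 -p 443:443 \
  lancachenet/monolithic:latest
```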

Possible Solution

To prevent unnecessary disk activity, it would be nice if you could either:

MathewBurnett commented 3 years ago

Our main use case for the project is to support a LAN party where, over a 4-5 day period, there is a constant pull from the cache by many concurrent users.

VibroAxe commented 3 years ago

As proto has said, most high-performance caches are (a) running for short periods of time or (b) running off SSDs; any long-running lancache instance will likely need some handling of the log files anyway to allow log rotation.
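
That handling could be as simple as a host-side logrotate rule over the two exposed files; this is only a sketch, and the path below is a placeholder for whatever directory you bind-mount for the logs:

```
# /etc/logrotate.d/lancache -- hypothetical host-side rotation for the
# exposed access.log/error.log; adjust the path to your logs bind mount.
/srv/lancache/logs/access.log /srv/lancache/logs/error.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}
```

copytruncate avoids having to signal nginx inside the container; the alternative would be a postrotate hook running something like nginx -s reopen via docker exec.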

Putting the log files onto tmpfs would mean they were lost on any issue with the container or on upgrades, which would lose the only data you have to parse to determine usage/errors.

If you did want to remap these folders as tmpfs, I'd suggest extending the container and adding a few lines to the Dockerfile RUN to remount these folders onto tmpfs.
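
Since a mount made in a RUN step would not persist into the running container, a rough (untested) sketch of that extension could instead symlink the log directories into /tmp and rely on a runtime tmpfs over /tmp; this assumes nothing else in the image depends on those exact paths:

```
# Dockerfile -- hypothetical extension of the official image.
# Redirect the log directories found in the overlay into /tmp; running
# the result with `--tmpfs /tmp` then keeps all of those writes in RAM.
FROM lancachenet/monolithic:latest

RUN rm -rf /var/log/nginx /var/log/supervisor && \
    ln -s /tmp /var/log/nginx && \
    ln -s /tmp /var/log/supervisor
```

Build it with something like `docker build -t lancache-tmpfs .` and start it with your usual options plus `--tmpfs /tmp`, with the caveat above that everything in those logs disappears whenever the container is recreated.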