pypt opened 3 years ago
what do you think is a sensible upper bound for storage_opt?
Dunno, as I don't really know what exactly it limits :) Is it a cap on the container image's own files? Or the files that the container creates while running? Could you test it out for me? Also, what happens when the container exceeds that limit? Does it get killed, or can it just not write anything anymore?
Generally containers aren't supposed to do much writing to their own root partitions (only to the volumes) while running, and our containers don't write much anywhere. Some exceptions:

- `elasticsearch-base` specifies a temporary directory (https://github.com/mediacloud/backend/blob/b83eb056cdde7da12b345cc60c7f8b9bb168f74e/apps/elasticsearch-base/config/jvm.options#L10-L11), but I'm not sure what it writes there (if anything); could you SSH into servers that run images based on `elasticsearch-base` (only `elk-elasticsearch` for now), `docker exec` into a running container, and see what's in `/var/tmp`?
- We use temporary files (`tempfile` and `File::Temp`) here and there; could you grep for uses of those? Some of the users that I remember are `copy_from` and `copy_to`, which accommodate PostgreSQL's `COPY`, so the CSVs that are being copied from / to can get quite large sometimes, but then we still want to ensure that there aren't too many of them.
- `elk-filebeat` probably stores the logs that it has collected somewhere. Could you check its usage too?

If, say, `storage_opt` limits the amount of data that gets written to a running container, maybe a good liberal upper cap could be 5 GB or something like that? Or 10 GB?
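For the grep step above, something like this should turn up the temp-file users (the `apps/` path and file extensions are assumptions about the repo layout):

```shell
# Find File::Temp / tempfile() usage across the Perl code; run from the repo root
grep -rn --include='*.pm' --include='*.pl' -E 'File::Temp|tempfile' apps/
```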
If you can, find out what exactly `storage_opt` does (and whether it works at all), report back here, and then we'll figure out what we can do with it.
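One way to probe it empirically (a sketch; assumes Docker is available on a host with a supported storage driver, and the sizes are arbitrary):

```shell
# Cap the container's writable layer at 1 GB, then deliberately try to exceed it;
# if the cap works, dd should fail with "No space left on device"
docker run --rm --storage-opt size=1G alpine sh -c \
    'df -h / && dd if=/dev/zero of=/bigfile bs=1M count=2048'
```

That would also answer the "killed vs. can't write" question: the container isn't killed, the write just fails.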
also, do you think the right place to put that is the `x-common-configuration` section of `apps/docker-compose.dist.yml`, or would it be better to add it only for certain apps (e.g. `extract-and-vector`, our original problem child here)?
All apps can decide to write things, so we'd be looking into adding a storage cap on all apps, I'd think. `x-common-configuration` sets common environment variables on (most) services; I think `storage_opt` gets set somewhere else.
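For reference, Compose does have a service-level `storage_opt` key, so per-service it might look like this (a sketch, assuming our Compose file format supports the key; the `5G` figure is just the number floated above):

```yaml
services:
  extract-and-vector:
    storage_opt:
      size: '5G'
```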
> If you can, find out what exactly `storage_opt` does (and whether it works at all), report back here, and then we'll figure out what we can do with it.
Looks like this is for setting the container's rootfs size at creation time: https://docs.docker.com/engine/reference/commandline/run/#set-storage-driver-options-per-container
From the docs:
> This option is only available for the devicemapper, btrfs, overlay2, windowsfilter and zfs graph drivers. For the devicemapper, btrfs, windowsfilter and zfs graph drivers, user cannot pass a size less than the Default BaseFS Size. For the overlay2 storage driver, the size option is only available if the backing fs is xfs and mounted with the pquota mount option. Under these conditions, user can pass any size less than the backing fs size.
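A quick host-side check for those conditions might look like this (a sketch; assumes the default `/var/lib/docker` data root, which may not be a separate mount on every host):

```shell
docker info --format '{{.Driver}}'              # needs to print: overlay2
df -T /var/lib/docker | awk 'NR==2 {print $2}'  # backing filesystem; needs to be xfs
grep '/var/lib/docker' /proc/mounts             # options should include prjquota (i.e. pquota)
```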
The problem is that it only works for overlay2 over xfs, and in our case we use ext4, so this option isn't compatible with our setup. Per our discussion earlier, I'm just gonna go ahead and fix the Jieba cache issue and call it a day.
`extract-and-vector` workers tend to fill up `/var/tmp` with gigabytes of pretty much identical files, each either 0 or 3332489 bytes in size. It took me a while to notice that a temporary file with a random name and a temporary file with a not-so-random name have identical file sizes:
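For anyone reproducing this, the size distribution can be tallied against a running worker container (the container name is hypothetical, and GNU find's `-printf` is assumed):

```shell
# Count /var/tmp files by size inside the container; the 0 and 3332489 buckets dominate
docker exec extract-and-vector find /var/tmp -maxdepth 1 -type f -printf '%s\n' \
    | sort -n | uniq -c | sort -rn | head
```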
Jieba is a Python library which does Chinese language tokenization for us. Given that it uses a dictionary to do that, it has to pre-load some stuff:
https://github.com/mediacloud/backend/blob/04bc9c63b55a20ab4f08aed2bef599bf94cd7474/apps/common/Dockerfile#L139-L144
but it seems that the resulting `/var/tmp/jieba.cache` doesn't become accessible to its users, as the file gets created with `root:root` ownership and `600` permissions while its users run as `mediacloud:mediacloud`, so Jieba resorts to rebuilding that cache file on every call.

@jtotoole, could you:
- fix `jieba.cache`'s file permissions at build time so that the Jieba library can access it; probably you just need to run that cache creation script as a different user in the `Dockerfile`
- limit container storage in `docker-compose.yml` where appropriate - you'll probably need `storage_opt` for that
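For the first item, a minimal `Dockerfile` sketch of the "different user" approach (the cache-priming command is illustrative; the actual `RUN` line in `apps/common/Dockerfile` may differ):

```dockerfile
# Build the Jieba cache as the runtime user so the file ends up owned by
# mediacloud:mediacloud instead of root:root with 600 permissions
USER mediacloud
RUN python3 -c 'import jieba; jieba.initialize()'
USER root
```

Alternatively, keep building it as root and `chown mediacloud:mediacloud /var/tmp/jieba.cache && chmod 644` it afterwards; either way the workers should stop rebuilding the cache on every call.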