apptainer / singularity

Singularity has been renamed to Apptainer as part of us moving the project to the Linux Foundation. This repo has been persisted as a snapshot right before the changes.
https://github.com/apptainer/apptainer

/tmp/.singularity-runtime.* not cleaned up for specific docker container #1255

Closed: wresch closed this issue 3 years ago

wresch commented 6 years ago

Version of Singularity:

2.4.1

Expected behavior

/tmp/.singularity-runtime.* should be cleaned up when exiting container. This only occurs for a specific docker container so I'm not entirely sure that this is a singularity bug.

Actual behavior

not actually cleaned up

Steps to reproduce behavior

$ singularity -vvv run 'docker://quay.io/biocontainers/vcflib:1.0.0_rc1--0'
Increasing verbosity level (4)
Singularity version: 2.4.1-dist
Exec'ing: /usr/local/apps/singularity/2.4.1/libexec/singularity/cli/run.exec
Evaluating args: 'docker://quay.io/biocontainers/vcflib:1.0.0_rc1--0'
VERBOSE2 SINGULARITY_COMMAND_ASIS found as False
VERBOSE2 SINGULARITY_ROOTFS found as /tmp/.singularity-runtime.8jgPtY3v/quay.io/biocontainers/vcflib:1.0.0_rc1--0
VERBOSE2 SINGULARITY_METADATA_FOLDER found as /tmp/.singularity-runtime.8jgPtY3v/quay.io/biocontainers/vcflib:1.0.0_rc1--0/.singularity.d
VERBOSE2 SINGULARITY_FIX_PERMS found as False
VERBOSE2 SINGULARITY_COLORIZE not defined (None)
VERBOSE2 SINGULARITY_DISABLE_CACHE found as False
VERBOSE2 SINGULARITY_CACHEDIR found as /data/wresch/temp/singularity/cache
VERBOSE2 REGISTRY not defined (None)
VERBOSE2 NAMESPACE not defined (None)
VERBOSE2 SINGULARITY_DOCKER_ARCHITECTURE found as amd64
VERBOSE2 SINGULARITY_DOCKER_OS found as linux
VERBOSE2 SINGULARITY_ENVIRONMENT found as /tmp/.singularity-runtime.8jgPtY3v/quay.io/biocontainers/vcflib:1.0.0_rc1--0/.singularity.d/env/90-environment.sh
VERBOSE2 SINGULARITY_RUNSCRIPT found as /tmp/.singularity-runtime.8jgPtY3v/quay.io/biocontainers/vcflib:1.0.0_rc1--0/singularity
VERBOSE2 SINGULARITY_TESTFILE found as /tmp/.singularity-runtime.8jgPtY3v/quay.io/biocontainers/vcflib:1.0.0_rc1--0/.singularity.d/test
VERBOSE2 SINGULARITY_DEFFILE found as /tmp/.singularity-runtime.8jgPtY3v/quay.io/biocontainers/vcflib:1.0.0_rc1--0/.singularity.d/Singularity
VERBOSE2 SINGULARITY_HELPFILE found as /tmp/.singularity-runtime.8jgPtY3v/quay.io/biocontainers/vcflib:1.0.0_rc1--0/.singularity.d/runscript.help
VERBOSE2 SINGULARITY_ENVBASE found as /tmp/.singularity-runtime.8jgPtY3v/quay.io/biocontainers/vcflib:1.0.0_rc1--0/.singularity.d/env
VERBOSE2 SINGULARITY_LABELFILE found as /tmp/.singularity-runtime.8jgPtY3v/quay.io/biocontainers/vcflib:1.0.0_rc1--0/.singularity.d/labels.json
VERBOSE2 SINGULARITY_INCLUDECMD found as False
VERBOSE2 SINGULARITY_NOHTTPS found as False
VERBOSE2 SINGULARITY_PULLFOLDER found as /spin1/users/wresch/temp
VERBOSE2 SHUB_NAMEBYHASH not defined (None)
VERBOSE2 SHUB_NAMEBYCOMMIT not defined (None)
VERBOSE2 SHUB_CONTAINERNAME not defined (None)
VERBOSE2 SINGULARITY_CONTENTS found as /tmp/.singularity-layers.LLtcUe9H
VERBOSE2 SINGULARITY_PYTHREADS found as 9
VERBOSE2 SINGULARITY_CONTAINER found as docker://quay.io/biocontainers/vcflib:1.0.0_rc1--0
VERBOSE2 SINGULARITY_DOCKER_USERNAME not defined (None)
VERBOSE Docker image: quay.io/biocontainers/vcflib:1.0.0_rc1--0
VERBOSE2 Specified Docker ENTRYPOINT as %runscript.
VERBOSE Registry: quay.io
VERBOSE Namespace: biocontainers
VERBOSE Repo Name: vcflib
VERBOSE Repo Tag: 1.0.0_rc1--0
VERBOSE Version: None
VERBOSE Obtaining tags: https://quay.io/v2/biocontainers/vcflib/tags/list
VERBOSE3 Response on obtaining token is None.
Docker image path: quay.io/biocontainers/vcflib:1.0.0_rc1--0
VERBOSE Obtaining manifest: https://quay.io/v2/biocontainers/vcflib/manifests/1.0.0_rc1--0
VERBOSE Obtaining manifest: https://quay.io/v2/biocontainers/vcflib/manifests/1.0.0_rc1--0
Cache folder set to /spin1/users/wresch/temp/singularity/cache/docker
VERBOSE3 Found Docker command (Entrypoint) None
VERBOSE3 Found Docker command (Cmd) [u'/bin/sh']
VERBOSE3 Adding Docker CMD as Singularity runscript...
VERBOSE3 Found Docker command (Env) [u'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin']
VERBOSE3 Found Docker container environment!
VERBOSE3 Adding Docker environment to metadata tar
VERBOSE3 Found Docker command (Labels) {}
VERBOSE3 Adding Docker labels to metadata tar
VERBOSE3 Adding Docker runscript to metadata tar
VERBOSE2 Tar file with Docker env and labels: /spin1/users/wresch/temp/singularity/cache/metadata/sha256:be3557811c54a148202bd45d9797a6d0f8fddb15b435fae1bdc1713d3cc84a36.tar.gz
VERBOSE3 Writing Docker layers files to /tmp/.singularity-layers.LLtcUe9H
VERBOSE2 Writing file /tmp/.singularity-layers.LLtcUe9H with mode w.
VERBOSE2 Writing file /tmp/.singularity-layers.LLtcUe9H with mode a.
Creating container runtime...
Importing: base Singularity environment
Exploding layer: sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4.tar.gz
Exploding layer: sha256:77c6c00e8b61bb628567c060b85690b0b0561bb37d8ad3f3792877bddcfe2500.tar.gz
Exploding layer: sha256:3aaade50789a6510c60e536f5e75fe8b8fc84801620e575cb0435e2654ffd7f6.tar.gz
Exploding layer: sha256:00cf8b9f3d2a08745635830064530c931d16f549d031013a9b7c6535e7107b88.tar.gz
Exploding layer: sha256:7ff999a2256f84141f17d07d26539acea8a4d9c149fefbbcc9a8b4d15ea32de7.tar.gz
Exploding layer: sha256:d2ba336f2e4458a9223203bf17cc88d77e3006d9cbf4f0b24a1618d0a5b82053.tar.gz
Exploding layer: sha256:dfda3e01f2b637b7b89adb401f2f763d592fcedd2937240e2eb3286fabce55f0.tar.gz
Exploding layer: sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4.tar.gz
Exploding layer: sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4.tar.gz
Exploding layer: sha256:10c3bb32200bdb5006b484c59b5f0c71b4dbab611d33fca816cd44f9f5ce9e3c.tar.gz
Exploding layer: sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4.tar.gz
Exploding layer: sha256:58033fe7de58dccadfa101894b5cb68bc864d44b372c834ffcc944506e0c0d3b.tar.gz
Exploding layer: sha256:be3557811c54a148202bd45d9797a6d0f8fddb15b435fae1bdc1713d3cc84a36.tar.gz
VERBOSE: Set messagelevel to: 4
VERBOSE: Initialize configuration file: /usr/local/apps/singularity/2.4.1/etc/singularity/singularity.conf
VERBOSE: Got config key allow setuid = 'yes'
VERBOSE: Got config key max loop devices = '256'
VERBOSE: Got config key allow pid ns = 'yes'
VERBOSE: Got config key config passwd = 'yes'
VERBOSE: Got config key config group = 'yes'
VERBOSE: Got config key config resolv_conf = 'yes'
VERBOSE: Got config key mount proc = 'yes'
VERBOSE: Got config key mount sys = 'yes'
VERBOSE: Got config key mount dev = 'yes'
VERBOSE: Got config key mount devpts = 'yes'
VERBOSE: Got config key mount home = 'yes'
VERBOSE: Got config key mount tmp = 'yes'
VERBOSE: Got config key mount hostfs = 'no'
VERBOSE: Got config key bind path = '/etc/localtime'
VERBOSE: Got config key bind path = '/etc/hosts'
VERBOSE: Got config key user bind control = 'yes'
VERBOSE: Got config key enable overlay = 'try'
VERBOSE: Got config key mount slave = 'yes'
VERBOSE: Got config key sessiondir max size = '16'
VERBOSE: Got config key allow container squashfs = 'yes'
VERBOSE: Got config key allow container extfs = 'yes'
VERBOSE: Got config key allow container dir = 'yes'
VERBOSE: Initializing Singularity Registry
VERBOSE: Adding value to registry: 'LIBEXECDIR' = '/usr/local/apps/singularity/2.4.1/libexec'
VERBOSE: Adding value to registry: 'COMMAND' = 'run'
VERBOSE: Adding value to registry: 'MESSAGELEVEL' = '4'
VERBOSE: Adding value to registry: 'ROOTFS' = '/tmp/.singularity-runtime.8jgPtY3v/quay.io/biocontainers/vcflib:1.0.0_rc1--0'
VERBOSE: Adding value to registry: 'VERSION' = '2.4.1-dist'
VERBOSE: Adding value to registry: 'LOCALSTATEDIR' = '/usr/local/apps/singularity/2.4.1/var'
VERBOSE: Adding value to registry: 'CACHEDIR' = '/data/wresch/temp/singularity/cache'
VERBOSE: Adding value to registry: 'CONTENTS' = '/tmp/.singularity-layers.LLtcUe9H'
VERBOSE: Adding value to registry: 'SYSCONFDIR' = '/usr/local/apps/singularity/2.4.1/etc'
VERBOSE: Adding value to registry: 'BINDIR' = '/usr/local/apps/singularity/2.4.1/bin'
VERBOSE: Adding value to registry: 'CLEANUPDIR' = '/tmp/.singularity-runtime.8jgPtY3v'
VERBOSE: Adding value to registry: 'CONTAINER' = 'docker://quay.io/biocontainers/vcflib:1.0.0_rc1--0'
VERBOSE: Adding value to registry: 'IMAGE' = '/tmp/.singularity-runtime.8jgPtY3v/quay.io/biocontainers/vcflib:1.0.0_rc1--0'
VERBOSE: Set home (via getpwuid()) to: /home/wresch
VERBOSE: Running SUID program workflow
VERBOSE: Checking program has appropriate permissions
VERBOSE: Checking configuration file is properly owned by root
VERBOSE: Checking if singularity.conf allows us to run as suid
VERBOSE: Invoking the user namespace
VERBOSE: Not virtualizing USER namespace: running as SUID
VERBOSE: Instantiating read only container image object
VERBOSE: Adding value to registry: 'CLEANUPD_FD' = '-1'
VERBOSE: Adding value to registry: 'CLEANUPD_FD' = '5'
VERBOSE: Found prior value for 'CLEANUPD_FD', overriding with '5'
VERBOSE: Exec'ing cleanupd thread: /usr/local/apps/singularity/2.4.1/libexec/singularity/bin/cleanupd
VERBOSE: Set messagelevel to: 4
VERBOSE: Checking input from environment: 'SINGULARITY_CLEANUPDIR'
VERBOSE: Obtained input from environment 'SINGULARITY_CLEANUPDIR' = '/tmp/.singularity-runtime.8jgPtY3v'
VERBOSE: Checking input from environment: 'SINGULARITY_CLEANUPTRIGGER'
VERBOSE: Obtained input from environment 'SINGULARITY_CLEANUPTRIGGER' = '/tmp/.singularity-cleanuptrigger.4DPjfZcP'
VERBOSE: Daemonizing cleandir cleanup process
VERBOSE: Not virtualizing IPC namespace on user request
VERBOSE: Not virtualizing PID namespace on user request
VERBOSE: Not virtualizing network namespace on user request
VERBOSE: Using session directory: /usr/local/apps/singularity/2.4.1/var/singularity/mnt/session
VERBOSE: Adding value to registry: 'SESSIONDIR' = '/usr/local/apps/singularity/2.4.1/var/singularity/mnt/session'
VERBOSE: Trying OverlayFS as requested by configuration
VERBOSE: Mounting overlay with options: lowerdir=/usr/local/apps/singularity/2.4.1/var/singularity/mnt/container,upperdir=/usr/local/apps/singularity/2.4.1/var/singularity/mnt/overlay/upper,workdir=/usr/local/apps/singularity/2.4.1/var/singularity/mnt/overlay/work
VERBOSE: Singularity overlay mount did not work (No such device), continuing without it
VERBOSE: Running all mount components
VERBOSE: Found 'bind path' = /etc/localtime, /etc/localtime
WARNING: Non existent bind point (file) in container: '/etc/localtime'
VERBOSE: Found 'bind path' = /etc/hosts, /etc/hosts
VERBOSE: Binding '/etc/hosts' to '/usr/local/apps/singularity/2.4.1/var/singularity/mnt/final//etc/hosts'
VERBOSE: Bind-mounting host /proc
VERBOSE: Mounting /sys
VERBOSE: Bind mounting /dev
VERBOSE: Mounting home directory source into session directory: /home/wresch -> /usr/local/apps/singularity/2.4.1/var/singularity/mnt/session/home/wresch
VERBOSE: Mounting staged home directory base to container's base dir: /usr/local/apps/singularity/2.4.1/var/singularity/mnt/session/home -> /usr/local/apps/singularity/2.4.1/var/singularity/mnt/final/home
VERBOSE: Mounting directory: /tmp
VERBOSE: Mounting directory: /var/tmp
VERBOSE: Not mounting CWD, directory does not exist within container: /data/wresch/temp
VERBOSE: Running file components
VERBOSE: Checking for template passwd file: /usr/local/apps/singularity/2.4.1/var/singularity/mnt/final/etc/passwd
VERBOSE: Creating template of /etc/passwd
VERBOSE: Creating template passwd file and appending user data: /usr/local/apps/singularity/2.4.1/var/singularity/mnt/session/passwd
VERBOSE: Binding file '/usr/local/apps/singularity/2.4.1/var/singularity/mnt/session/passwd' to '/usr/local/apps/singularity/2.4.1/var/singularity/mnt/final/etc/passwd'
VERBOSE: Creating template of /etc/group for containment
VERBOSE: Updating group file with user info
VERBOSE: Found supplementary group membership in: 10
VERBOSE: Adding user's supplementary group ('wheel') info to template group file
VERBOSE: Found supplementary group membership in: 12
VERBOSE: Adding user's supplementary group ('mail') info to template group file
VERBOSE: Found supplementary group membership in: 6990
VERBOSE: Adding user's supplementary group ('webcpu') info to template group file
VERBOSE: Found supplementary group membership in: 10484
VERBOSE: Adding user's supplementary group ('helixmon') info to template group file
VERBOSE: Found supplementary group membership in: 57823
VERBOSE: Adding user's supplementary group ('helixapp') info to template group file
VERBOSE: Binding file '/usr/local/apps/singularity/2.4.1/var/singularity/mnt/session/group' to '/usr/local/apps/singularity/2.4.1/var/singularity/mnt/final/etc/group'
WARNING: Bind file source does not exist on host: /etc/resolv.conf
VERBOSE: Containing all rootfs components
VERBOSE: Entering container file system root: /usr/local/apps/singularity/2.4.1/var/singularity/mnt/final
VERBOSE: Could not chdir to current dir: /data/wresch/temp
LOG    : USER=wresch, IMAGE='vcflib:1.0.0_rc1--0', COMMAND='run'
VERBOSE: Starting runscript

Singularity> exit

VERBOSE: Cleaning directory: /tmp/.singularity-runtime.8jgPtY3v
WARNING: Failed removing file: /tmp/.singularity-runtime.8jgPtY3v/quay.io/biocontainers/vcflib:1.0.0_rc1--0/tmp
ERROR  : Could not remove directory /tmp/.singularity-runtime.8jgPtY3v: Device or resource busy
ABORT  : Retval = 255

The really funny thing is that attaching strace to cleanupd after starting the container makes the behavior revert to what's expected:

$ singularity -vvv run 'docker://quay.io/biocontainers/vcflib:1.0.0_rc1--0'
...
Singularity> 

In another shell:

$ ps -u $USER | grep cleanupd
28963 ?        00:00:00 cleanupd
$ strace -p 28963
Process 28963 attached                                     
flock(4, LOCK_EX

Then exit the shell in the container:

Singularity> exit
VERBOSE: Cleaning directory: /tmp/.singularity-runtime.Lzh40tHw

So now the runtime dir does get cleaned up as it should, and the strace output says:

write(2, "VERBOSE: Cleaning directory: /tm"..., 68) = 68
[...snip...]
unlink("/tmp/.singularity-cleanuptrigger.GQb3ColH") = 0
exit_group(0)                           = ? 
+++ exited with 0 +++

odd, right?

cclerget commented 6 years ago

Hi @wresch, this looks like a race condition between container exit and cleanupd: cleanupd removes the directory before the container has completely exited. strace slows down the traced process, which could explain why cleanupd behaves normally when traced.

Is this issue annoying enough to require a fix for the 2.4.3 release? In the meantime, it will be fixed in the next major release.
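
For illustration, the handoff the strace output above hints at (a blocking flock on the trigger file, then the removal) can be modelled roughly with the flock(1) utility. This is only a sketch of the mechanism, not the actual cleanupd code, and the paths are made up:

trigger=$(mktemp /tmp/.trigger.XXXXXX)
workdir=$(mktemp -d /tmp/.runtime.XXXXXX)

# parent ("singularity") takes an exclusive lock on the trigger file
# for the lifetime of the container
exec 9>"$trigger"
flock -x 9

# "cleanupd": blocks on the same lock, so it only proceeds once the parent
# releases it, then removes the runtime dir and the trigger file
flock -x "$trigger" -c "rm -rf '$workdir' '$trigger'" &

sleep 5                  # the container runs here
exec 9>&-                # closing the fd releases the lock; cleanup fires

The race would be the window between the lock being released and the kernel finishing the teardown of the container's mounts; a removal that starts inside that window fails with "Device or resource busy".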

wresch commented 6 years ago

Ah - I should have thought of that. Well, this caused a bit of trouble on our cluster: a user was running a toil pipeline with a bunch of docker containers, and the leftover runtime directories of that one particular container ended up trashing /tmp on a bunch of nodes. It was made worse by a (now fixed) bug in our /tmp cleanup code. The final impact for us depends on how many docker containers behave that way. The impact for the user is that he has to either find/build a different container or clean up manually. If it were me, I'd say a race condition like that should probably be fixed in the next micro release, but I don't know how long your to-do list is.

cclerget commented 6 years ago

@wresch Could you test the above PR to see if it fixes the issue?

wresch commented 6 years ago

The 50ms delay seems to do the trick in this case:

Stock version - replicating faulty behavior

$ singularity -vvv run 'docker://quay.io/biocontainers/vcflib:1.0.0_rc1--0'
....
LOG    : USER=wresch, IMAGE='vcflib:1.0.0_rc1--0', COMMAND='run'
VERBOSE: Starting runscript
Singularity> exit
VERBOSE: Cleaning directory: /tmp/.singularity-runtime.JtGQCkTC
WARNING: Failed removing file: /tmp/.singularity-runtime.JtGQCkTC/quay.io/biocontainers/vcflib:1.0.0_rc1--0/dev
ERROR  : Could not remove directory /tmp/.singularity-runtime.JtGQCkTC: Device or resource busy
ABORT  : Retval = 255

Patched version (patch #1265 ):

$ bin/singularity -vvv run 'docker://quay.io/biocontainers/vcflib:1.0.0_rc1--0'
...
LOG    : USER=wresch, IMAGE='vcflib:1.0.0_rc1--0', COMMAND='run'
VERBOSE: Starting runscript
Singularity> exit
VERBOSE: Cleaning directory: /tmp/.singularity-runtime.zcKMsRl2

Tested multiple times with runtime dir on local disk (/tmp), on NFS, and on GPFS (via setting SINGULARITY_LOCALCACHEDIR).
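
For anyone curious about the shape of the fix: as I understand it, the idea is simply to retry the removal briefly instead of failing on the first "Device or resource busy". A rough sketch of that idea (not the actual code in #1265; cleanup_runtime_dir is a made-up helper name, and the fractional sleep assumes GNU sleep):

# hypothetical retry helper, not the actual patch: re-attempt the removal a
# few times with a short sleep, since "Device or resource busy" usually
# clears once the container's namespace teardown completes
cleanup_runtime_dir() {
    dir="$1"
    for attempt in 1 2 3 4 5; do
        rm -rf "$dir" 2>/dev/null
        [ ! -e "$dir" ] && return 0
        sleep 0.05          # ~50ms between attempts
    done
    echo "WARNING: could not remove $dir" >&2
    return 1
}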

soichih commented 6 years ago

I am hit really hard by this issue. Most of our containers are giant (3-5G) and (probably because of the size??) singularity fails to clean up the /tmp/.singularity-* directories, which then causes /tmp on our cluster nodes to run out of disk space.

GodloveD commented 6 years ago

@soichih can you test that the release-2.4 branch fixes this issue for you please?

soichih commented 6 years ago

@GodloveD

I am still seeing this issue as of 2.4.2-dist

hayashis@karst(h2):~ $ singularity --version
2.4.2-dist

Does 2.4.2-dist contain the fix you mention?

GodloveD commented 6 years ago

Hi @soichih. No, 2.4.2-dist doesn't have it, but I believe we put it into the branch that is slated to become 2.4.3. Right now that is in release-2.4. Are you able to test that branch?

soichih commented 6 years ago

I don't have sudo access to our HPC cluster, so I've tried to recreate the problem on my Ubuntu dev VM using both 2.4.1-dist from neurodebian and release-2.4 from this github repo (./configure && make install).

I've run my containers many times on both versions, but I couldn't recreate the problem on this VM. When singularity exits, it successfully removes the .singularity-runtime.**** directory.

I will try repeating my test on another slurm cluster that I do have sudo access to.

I did notice, however, that if I stop singularity while it's in the "Creating container runtime..." stage, the .singularity-runtime directory remains in /tmp. It's possible that .singularity-runtime directories are left in /tmp and pile up if 1) the HPC system kills the job (or it gets preempted by other jobs, etc.) while the runtime is being created, or 2) /tmp becomes full while the container is being created and singularity crashes.

Does cleanup not happen if singularity is killed while it's creating container runtime?
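
For reference, this is roughly how I checked for leftovers (a sketch; the image is the one from this issue, and the timing of the kill is by hand):

$ singularity run 'docker://quay.io/biocontainers/vcflib:1.0.0_rc1--0' &
# wait until "Creating container runtime..." is printed, then kill the job
$ kill -TERM %1
# anything printed here is a leftover that was never cleaned up
$ ls -d /tmp/.singularity-runtime.* /tmp/.singularity-layers.* 2>/dev/null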

soichih commented 6 years ago

This issue is still happening after I upgraded singularity to 2.5.2-dist. I think it's probably related to the use of a docker container (singularity exec docker://somecontainer ...), but I am not sure.

Is there a way to capture the runtime directory path? If so, I can try adding "rm -rf /tmp/.singularity-runtime.$id" to our batch scheduler epilogue to force it to be cleaned up. Does anyone have any suggestions?
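
In the meantime, one workaround (a sketch, assuming the stale directories are owned by the job user and anything untouched for an hour is safe to delete) would be an epilogue that sweeps the known name patterns instead of trying to capture the exact path:

# hypothetical epilogue snippet: remove this user's singularity scratch
# files in /tmp that have not been modified for more than 60 minutes
find /tmp -maxdepth 1 -user "$USER" \
    \( -name '.singularity-runtime.*' -o -name '.singularity-layers.*' \
       -o -name '.singularity-cleanuptrigger.*' \) \
    -mmin +60 -exec rm -rf {} +

Alternatively, since the runtime directory location appears to follow SINGULARITY_LOCALCACHEDIR (that is how @wresch moved it in the test above), pointing it at a per-job scratch directory that the scheduler already wipes (for example export SINGULARITY_LOCALCACHEDIR="$TMPDIR" in the job script) would keep /tmp out of the picture entirely.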

soichih commented 6 years ago

OK, I did a bit more digging. The issue seems to be caused by cleanupd getting killed by batch schedulers after a job timeout. Here is the sequence of events:

  1. A job starts up singularity with some large container (>1G?).
  2. The PBS cluster detects a walltime violation for the job and sends SIGTERM to the singularity process.
  3. The singularity process dies, releasing the cleanup trigger flock. cleanupd acquires the flock and proceeds with cleanup (I see the "Cleaning directory: ..." message).
  4. Soon after 3, the PBS cluster also sends SIGTERM to cleanupd, and cleanupd dies before it finishes cleaning.
  5. /tmp is left with .singularity-runtime.* (and .singularity-cleanuptrigger.*) entries.

Both PBS and slurm send SIGTERM followed by SIGKILL if a process won't die (by default 30 seconds later for slurm; not sure about PBS). I am thinking that, if cleanupd were updated to handle (or ignore) SIGTERM instead of just terminating, it would give the s_rmdir function more time to do its job before being terminated by SIGKILL. (You should probably also advise cluster admins to configure a long enough grace period?)
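
To make that concrete, here is a shell sketch of the suggested behaviour (the real cleanupd is a C binary, so this is only a model; the environment variable names are the ones visible in the verbose log above):

#!/bin/sh
# hypothetical cleanupd-like sketch: ignore SIGTERM so the scheduler's TERM
# cannot stop the removal part-way; only the later SIGKILL can interrupt it
trap '' TERM

# block until the container side releases the trigger lock, then clean up
flock -x "$SINGULARITY_CLEANUPTRIGGER" -c true
rm -rf "$SINGULARITY_CLEANUPDIR"
rm -f "$SINGULARITY_CLEANUPTRIGGER"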

Another related issue with cleanupd is that it only starts up after the runtime directory has finished being created. If a job is killed while docker layers are being exploded, /tmp is left with .singularity-runtime.* and .singularity-layers.* entries (I am seeing this on 2.5.2-dist). I believe cleanupd should be started before the runtime directory is created, and be made to clean up .singularity-layers.* as well as .singularity-runtime.*.

carterpeel commented 3 years ago

Hello,

This is a templated response that is being sent out to all open issues. We are working hard on 'rebuilding' the Singularity community, and a major task on the agenda is finding out what issues are still outstanding.

Please consider the following:

  1. Is this issue a duplicate, or has it been fixed/implemented since being added?
  2. Is the issue still relevant to the current state of Singularity's functionality?
  3. Would you like to continue discussing this issue or feature request?

Thanks, Carter

wresch commented 3 years ago

Didn't realize this was still open. As far as I'm concerned, I don't see any issues with cleanup in 3.7.3 with this container, and we haven't encountered any more issues with singularity trashing /tmp in a long time. The other comments are 3 years old as well, so I'll close.