nipreps / fmriprep

fMRIPrep is a robust and easy-to-use pipeline for preprocessing of diverse fMRI data. The transparent workflow dispenses with manual intervention, thereby ensuring the reproducibility of the results.
https://fmriprep.org
Apache License 2.0

Removing temporary files generated by fmriprep #2075

Closed · amyh101 closed 4 years ago

amyh101 commented 4 years ago

Hi all, thanks for a wonderful tool in neuroimaging!!

I am running fMRIPrep with Docker Desktop for Mac and keep running into a Docker error: 'no space left on device'.

When the error occurs, it appears the Docker VM has exceeded its allotted space even though there is still space left on the Mac's hard drive. I currently have 64 GB dedicated to Docker, which seems like it should be sufficient for running fMRIPrep at the single-participant level. Tracking disk usage, it appears that every time I run fMRIPrep (both using the fmriprep-docker script and running directly with a docker command) data accumulates within the Docker VM and is not removed before the next fMRIPrep command. For example:

Step (Docker VM disk usage)

  1. Pull fmriprep image (~22 GB)
  2. fmriprep executed - participant 0001 (~29 GB)
  3. fmriprep executed - participant 0002 (~36 GB)
  4. fmriprep executed - participant 0003 (~43 GB) ...

Eventually I exceed the allotted disk space and receive the Docker error message. I have tried using the `--clean-workdir` flag implemented in newer versions of fMRIPrep, but I receive the error described in #2074.
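
For reference, here is roughly what I am running, with placeholder paths and participant labels, plus a standard `docker system df` call that I use to watch the VM's usage grow (my actual command may include a few more options):

```bash
# Check how much space Docker is using after each run
# (standard Docker CLI; the reported sizes grow with every fMRIPrep run).
docker system df

# Single-participant run via the wrapper script; /data/bids and
# /data/derivatives are placeholder paths for my BIDS dataset and outputs.
fmriprep-docker /data/bids /data/derivatives participant \
    --participant-label 0001 \
    --fs-license-file /path/to/freesurfer/license.txt
```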

Thanks in advance!

mgxd commented 4 years ago

hi @amyh101 - are you running all your participants within the same container? If so, I would suggest running a single participant per container, leveraging the `--participant-label` flag. Then you would be able to manually run `docker system prune` between runs if necessary.

This post may also help clear up any Docker disk space problems.
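
Something along these lines (a rough sketch; the paths and labels are placeholders, and `docker system prune --force` just skips the confirmation prompt):

```bash
#!/usr/bin/env bash
# One fMRIPrep container per participant, cleaning Docker state between runs.
# /data/bids and /data/derivatives are placeholder paths; adjust to your setup.
for label in 0001 0002 0003; do
    fmriprep-docker /data/bids /data/derivatives participant \
        --participant-label "$label" \
        --fs-license-file /path/to/freesurfer/license.txt

    # Reclaim space used by stopped containers before the next run.
    docker system prune --force
done
```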

amyh101 commented 4 years ago

Thanks for the advice. I was already using the `--participant-label` flag but was not running `docker system prune` between runs; adding the prune step seems to do the trick.