This repository uses both neurodocker and tcy to create a standardized Docker image for the Complex Systems in Psychiatry Lab. It includes most of the software that CSP members need (a conda environment with Python & R and a bunch of useful libraries, SPM, FreeSurfer, etc.).
If you just want to use the Docker image, you can pull the latest version from Docker Hub:

```bash
docker pull johanneswiesner/csp:x.x.x
```

(where you replace `x.x.x` with the latest available version).
Neurodocker is able to create `.sif` files. However, you can also convert the Docker image to a `.sif` file on the fly by running:

```bash
singularity pull csp.sif docker://johanneswiesner/csp:x.x.x
```
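The resulting `.sif` file can then be used like any other Singularity container. A minimal sketch (the bind path is an example, and the command is skipped if Singularity/Apptainer is not installed):

```bash
# Open an interactive shell inside the image, making a host
# directory available at /data inside the container
if command -v singularity >/dev/null 2>&1; then
    singularity shell --bind /path/to/data:/data csp.sif
fi
```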
If you want to build the image yourself:

1. Clone this repository including its submodules:

   ```bash
   git clone --recurse-submodules https://github.com/JohannesWiesner/csp_docker.git
   ```

   This will automatically include the `tcy` repository as a submodule.

2. Run `bash generate_dockerfile.sh` to create a Dockerfile using neurodocker. By default, this will first run the `tcy` submodule to create an `environment.yml` file. This file will then be used to create a conda environment within the Docker image with the standard packages for CSP members.

3. Build the image:

   ```bash
   docker build -t xxx:xxx .
   ```

4. Run the image as a container:

   ```bash
   docker run -t -i --rm -p 8888:8888 xxx:xxx
   ```
Because it can be tedious to always execute steps 2-4 while developing, and because the creation of conda environments can take quite long, we included two more options:

- You can pass a `.yml` file of your choice using the `-y` option (e.g. `bash generate_dockerfile.sh -y path/to/your/file.yml`).
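Such a file is an ordinary conda environment specification. A minimal sketch of what could be passed via `-y` (the file name and package set are just illustrations):

```yaml
# my_env.yml - hypothetical custom environment file
name: csp
channels:
  - conda-forge
dependencies:
  - python=3.10
  - numpy
```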
We included a `test.yml` file within this repository with a couple of packages that are mostly needed to run nipype analyses, serving as an MVP.

- You can use the `-t` option (e.g. `bash generate_dockerfile.sh -t`). This will generate the Dockerfile, build the image, and run it as a container, while also mounting the subfolders of the included `/testing` directory to it.

This repository also provides a script `download_test_data.sh` that you can use to download a functional and an anatomical image from openneuro.org using `openneuro-py`. Note that you must install `openneuro-py` beforehand by following its installation instructions.

A few notes for development:

- Run `generate_dockerfile.sh` and `docker build` on a regular basis (preferably after every single edit). This is tedious, but in our experience too many edits at once make it hard to debug what went wrong. The neurodocker image is still under heavy development, which means it is not guaranteed that every combination of arguments you pass to `docker run -i --rm repronim/neurodocker:x.x.x generate docker` will lead to a bug-free Dockerfile.
- The `neurodebian:stretch-non-free` base image is quite old, and we would wish to switch to a newer version of neurodebian. However, with newer base images a lot of bugs happen, and software like SPM12 could not be installed using the neurodocker flags. (This is also tightly related to the first point, so make sure the image can be built and the container runs error-free when using a different base image.)
- Software can be installed either via the base image itself or via neurodocker flags (e.g. `--spm12`, which in theory should enable you to use any base image that you want). We are currently using a mixture of both options, as we were unable to install everything with just neurodocker. The long-term goal is to switch to a newer (and slimmer) base image and to install everything we need using only the neurodocker flags.

The `manually_created` directory contains (as the name suggests) Dockerfiles that were not created with neurodocker, but were written manually to bypass current issues that come with neurodocker.
In case you run into file-permission errors (e.g. you can't create files in your mounted directories), it makes sense to pass your host user and group ID to the Docker container. This can be done by adding the `-u` option to `docker run`, e.g.:

```bash
docker run ... -u $(id -u):$(id -g)
```

Using this option makes sure that the user inside the container has the same user and group ID as the host user. So whatever directories or files you created outside the container can now be manipulated by the container user.
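As a concrete sketch, the value passed to `-u` is just the numeric `<uid>:<gid>` pair of your host user (the mount path and image tag below are placeholders):

```bash
# Resolve the host user's numeric UID and GID, e.g. "1000:1000"
user_spec="$(id -u):$(id -g)"
echo "$user_spec"

# Run the container as that user so files created in the mounted
# directory stay owned by you on the host:
#   docker run -t -i --rm -u "$user_spec" -v "$PWD/output:/output" xxx:xxx
```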