Minimum requirements to run the PNL pipeline through containers
It would be useful for people to know how the containers can be deployed, either on a single machine or in a distributed environment. Something like the following:
Single machine
4 cores, 16 GB of RAM, the ability to run Docker or Singularity, and 50 GB of disk space for the container image
20 GB of storage space per case
Running the full pipeline on one case takes X hours (Y hours for structural masking + FreeSurfer, Z hours for dMRI masking + artifact correction + UKF tractography + white matter analysis). A sample invocation is sketched below.
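As a rough sketch of a single-machine run (the image name pnlbwh/pnlpipe, the tag, the /data mount point, and the pipeline command are assumptions, not confirmed names):

    # Pull the container image (image name and tag are assumed)
    docker pull pnlbwh/pnlpipe:latest

    # Run the pipeline on one case, mounting the case directory into the
    # container; /data and <pipeline command> are placeholders
    docker run --rm -v /path/to/case001:/data pnlbwh/pnlpipe:latest <pipeline command>

    # Singularity equivalent: convert the Docker image once, then run it
    singularity build pnlpipe.sif docker://pnlbwh/pnlpipe:latest
    singularity exec --bind /path/to/case001:/data pnlpipe.sif <pipeline command>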
Distributed environment
A Linux cluster with a job scheduler (e.g., LSF or SLURM)
The ability for each compute node to run Docker or Singularity containers
20 GB of network-attached storage per case (a sample job submission follows)
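As a rough sketch of distributed use, assuming the Singularity image pnlpipe.sif sits on the network-attached storage and each case gets one job sized to the per-case numbers above (the paths and resource figures are assumptions):

    # SLURM: one job per case, matching the 4-core / 16 GB figures above
    sbatch --cpus-per-task=4 --mem=16G \
        --wrap="singularity exec --bind /nas/cases/case001:/data pnlpipe.sif <pipeline command>"

    # LSF equivalent (rusage memory is typically specified in MB)
    bsub -n 4 -R "rusage[mem=16000]" \
        singularity exec --bind /nas/cases/case001:/data pnlpipe.sif <pipeline command>

A small wrapper loop over case IDs would submit the whole cohort the same way.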
Our Munich collaborators have an environment equivalent to the above.