Closed: @Enorya closed this issue 1 year ago
Hi @Enorya, my advice would be to pull the Docker image from Docker Hub on a computer where you have Docker installed, copy it to the cluster where you would like to run the pipeline with Singularity, and convert it to a Singularity image. Then, in the config file, you can specify the path of the image on your cluster. You could try something like this:
# pull the image from Docker Hub
docker pull maestsi/metontiime:latest
# export the image as a tar archive
docker save -o metontiime.img maestsi/metontiime:latest
# copy the metontiime.img file to the cluster
# scp metontiime.img user@cluster:/path/to/singularityCacheDir
# convert the Docker archive to a Singularity image
singularity build metontiime.sif docker-archive://metontiime.img
# edit the metontiime2.conf file:
# container = 'maestsi/metontiime:latest' -> container = "/path/to/metontiime.sif"
Changing the Singularity cache dir should also work, but there are also environment variables which may take priority over singularity.cacheDir. I remember I had this issue too, and I had to do some troubleshooting to fix it.
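If you do go the cache-dir route, those environment variables can be set explicitly before launching the pipeline. A sketch (the path is a placeholder, and exact precedence between these variables and singularity.cacheDir may depend on your Singularity/Nextflow versions):

```shell
#!/bin/bash
# Placeholder path: point both Singularity and Nextflow at the same cache dir.
export SINGULARITY_CACHEDIR=/path/to/singularityCacheDir     # read by Singularity itself
export NXF_SINGULARITY_CACHEDIR=/path/to/singularityCacheDir # read by Nextflow when pulling images
# Launch the pipeline from this same shell so the variables are inherited.
echo "cache dir set to: $SINGULARITY_CACHEDIR"
```

Setting both in the shell that launches Nextflow avoids surprises from one layer silently falling back to its default under $HOME.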
Moreover, I would advise not to change the path from the guest to the host system, i.e. use something like:
containerOptions = '--bind /scratch/leuven/:/scratch/leuven'
I hope the Docker -> Singularity solution works; let me know if you succeed!
SM
Hi @MaestSi ,
The solution you proposed solved my issue, thanks for that!
However, I have another problem now. The path I gave for the working directory is not recognized. I gave the full path, but all the commands that use this path fail (like cp or find). I always end up with the following error message while running the Nextflow pipeline:
Error executing process > 'concatenateFastq'
Caused by:
Process `concatenateFastq` terminated with an error exit status (1)
Command executed:
# mkdir -p /lustre1/project/stg_00026/enora/EMBRC-MinION/MetONTIIME/16S_combined_8c_100M_new-pip
# mkdir -p /lustre1/project/stg_00026/enora/EMBRC-MinION/MetONTIIME/16S_combined_8c_100M_new-pip/concatenateFastq
cp /lustre1/project/stg_00026/enora/EMBRC-MinION/raw_data/16S-COI_fastq/*fastq.gz /lustre1/project/stg_00026/enora/EMBRC-MinION/MetONTIIME/16S_combined_8c_100M_new-pip/concatenateFastq
Command exit status:
1
Command output:
(empty)
Command error:
INFO: Environment variable SINGULARITYENV_TMPDIR is set, but APPTAINERENV_TMPDIR is preferred
cp: cannot stat '/lustre1/project/stg_00026/enora/EMBRC-MinION/raw_data/16S-COI_fastq/*fastq.gz': No such file or directory
I checked by trying the commands myself in the terminal and it works when I use this path.
Do you have any idea why the path is not recognized? Am I doing something wrong?
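For what it's worth, the literal *fastq.gz in the error message is a useful clue: when a bash glob matches nothing, the pattern is passed unexpanded to cp, which then fails with "cannot stat". A minimal reproduction outside Singularity (the paths here are throwaway temp directories):

```shell
#!/bin/bash
# Demonstrate that an unmatched glob reaches cp as a literal string.
src=$(mktemp -d)   # empty directory: contains no *fastq.gz files
dst=$(mktemp -d)
cp "$src"/*fastq.gz "$dst" 2>err.txt
echo "cp exit status: $?"
cat err.txt        # error message contains the unexpanded pattern
rm -rf "$src" "$dst"
```

Seeing the unexpanded pattern in the error therefore usually means the source directory is empty, or, as it turned out here, not visible inside the container.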
Hi @Enorya, I think this is due to the fact that you forgot to mount the /lustre1 disk. To do that, at line 74 of the metontiime2.conf file, please edit:
containerOptions = '--bind /home/:/home --bind /lustre1:/lustre1'
In this way, the /lustre1 disk should also be accessible to Singularity. Best, SM
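For context, a containerOptions setting like this typically lives in a process scope in a standard Nextflow config. The excerpt below is illustrative only; the containerOptions line itself comes from this thread, while the surrounding scope is an assumption about how metontiime2.conf is laid out:

```groovy
// metontiime2.conf (illustrative excerpt)
process {
    containerOptions = '--bind /home/:/home --bind /lustre1:/lustre1'
}
```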
Hi @MaestSi ,
It worked, thanks for your help!
Best,
Perfect! Best, SM
Dear,
I'm trying to use your pipeline on a cluster (Slurm), but each time I try to launch it, it terminates very quickly because there is a problem with pulling the container (I'm working with Singularity, given that I'm on a cluster). Each time I end up with the following error message:
Do you have any idea how to fix this? I think it's linked to a space limit in the cachedir directory, so I already tried changing the Singularity cachedir to a location where there is more space, and I changed the container option, but for some weird reason it doesn't work: it always goes back to my user directory, where space is really limited. Do you know how I can change the path of the cachedir?
Thank you in advance for your help.