Description of the bug
I run AmpliSeq on an HPC cluster, using the nf-core conda environment for Nextflow (22.10.1) and Java (17.0.3). I keep getting a "no space left on device" error when the pipeline tries to pull and prepare the "quay.io-qiime2-core-2022.8.img" image. Previous issues suggested this was related either to disk space in the Singularity cache or to a missing Docker scope (e.g. issues #26 and #28), but after trying those suggestions I still get the same error. I have made sure that the Singularity cache, the image download directory, and everything else disk-space related is not an issue here (see the settings sketched below). I run the pipeline through SLURM with my own config for our HPC cluster, which works fine, and I use the singularity profile.
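To be concrete about what I mean by the cache and download locations, the kind of settings I have in place look roughly like this (paths are placeholders, not my exact values; NXF_SINGULARITY_CACHEDIR and SINGULARITY_CACHEDIR are the standard variables for the Nextflow and Singularity image caches):

# Placeholder paths pointing the image caches at a large project filesystem
export NXF_SINGULARITY_CACHEDIR=/cluster/projects/<project>/singularity_cache
export SINGULARITY_CACHEDIR=/cluster/projects/<project>/singularity_cache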
Command used and terminal output
conda activate nf-core
nextflow run /cluster/projects/nn9305k/src/nf-core-ampliseq-2.4.1/workflow/main.nf -c /cluster/projects/nn9305k/nextflow/configs/saga_ampliseq.config --input "samplesheet.tsv" --FW_primer "TCGTCGGCAGCGTCAGATGTGTATAAGAGACAGCCTACGGGNGGCWGCAG" --RV_primer "GTCTCGTGGGCTCGGAGATGTGTATAAGAGACAGGACTACHVGGGTATCTAATCC" --metadata "metadata.tsv" --metadata_category "condition" --outdir "ampliseq_all" --multiple_sequencing_runs --exclude_taxa "mitochondria,chloroplast,archaea" --min_frequency 2 --qiime_adonis_formula "condition" -work-dir $USERWORK/ampliseq -profile singularity
Error executing process > 'NFCORE_AMPLISEQ:AMPLISEQ:QIIME2_INSEQ (ASV_seqs.fasta)'
Caused by:
Failed to pull singularity image
command: singularity pull --name quay.io-qiime2-core-2022.8.img.pulling.1672995434452 docker://quay.io/qiime2/core:2022.8 > /dev/null
status : 255
message:
INFO: Converting OCI blobs to SIF format
INFO: Starting build...
Getting image source signatures
Copying blob sha256:42c077c10790d51b6f75c4eb895cbd4da37558f7215b39cbf64c46b288f89bda
Copying blob sha256:1a23c9d790a34c5bb13dbaf42e0ea2a555e089aefed7fdfa980654f773b39b39
Copying blob sha256:22a6fc63b9b529f00082379be512f0ca1c7a491872396994cf59b47e794c5e09
Copying blob sha256:42b7f294ddbda82da5a69b0675429a15dba0766bd64bafb23d78f809c5de8b5a
Copying blob sha256:1ee3d7358a92f1712f27fc911035fac4651ad6b3f7c97da8cc38a3b78f5b074c
Copying blob sha256:e6062fa5f610cc620655ed8b2fb29958b3727f948528bc6a402e9de3922a92a1
Copying blob sha256:97eeee145658c1d01efaf2797bf58fa5a2ff10a93e12f000545da61332b491dd
Copying blob sha256:b5ca682aa46ae8c65f085739ab2b482f712bb8394c428774f8fa8eca86ee8cd3
Copying blob sha256:f243d33467c7dccdc960f779c896627b806c24930e555c031a50b4d0f7e2fab9
Copying blob sha256:6a4d753ac330f9bc7ecf4e77b9c4e44a4b93c4aaa1fe37fd585c1b419fbd0ad8
Copying blob sha256:1ad759e143f36f80d4ea718efc85b40a7d80b75818d9869e027263682c6e89c8
Copying blob sha256:83ab021118e2a67cf71929bea0b9cec8c0008705406ded76519f703876b35b01
Copying blob sha256:6c22f43930cb8d2bfa59b408c25d67f0ac8f9c803d2bc4b38393195c6c006157
Copying blob sha256:f8eac0b5854d0fc2929ca318afc25a7501c4fd3463ba0d36ed5242e1f3d34aff
Copying blob sha256:206e727c2a9c92d5417ea7191e25da7ff36d884a864027ed57e11c858319c372
Copying blob sha256:3d51d16b3fd67df4d938c7514279ebd51b62d17abc3aee75ca2e36e3fa87341b
Copying config sha256:636582997d9636e249957f5de4a5d4acc17863d030c99da8c1f3a0664455e773
Writing manifest to image destination
Storing signatures
2023/01/06 09:59:02 info unpack layer: sha256:42c077c10790d51b6f75c4eb895cbd4da37558f7215b39cbf64c46b288f89bda
2023/01/06 09:59:04 info unpack layer: sha256:1a23c9d790a34c5bb13dbaf42e0ea2a555e089aefed7fdfa980654f773b39b39
2023/01/06 09:59:05 info unpack layer: sha256:22a6fc63b9b529f00082379be512f0ca1c7a491872396994cf59b47e794c5e09
2023/01/06 09:59:08 info unpack layer: sha256:42b7f294ddbda82da5a69b0675429a15dba0766bd64bafb23d78f809c5de8b5a
2023/01/06 09:59:08 info unpack layer: sha256:1ee3d7358a92f1712f27fc911035fac4651ad6b3f7c97da8cc38a3b78f5b074c
2023/01/06 09:59:11 info unpack layer: sha256:e6062fa5f610cc620655ed8b2fb29958b3727f948528bc6a402e9de3922a92a1
2023/01/06 09:59:11 info unpack layer: sha256:97eeee145658c1d01efaf2797bf58fa5a2ff10a93e12f000545da61332b491dd
2023/01/06 09:59:11 info unpack layer: sha256:b5ca682aa46ae8c65f085739ab2b482f712bb8394c428774f8fa8eca86ee8cd3
2023/01/06 09:59:11 info unpack layer: sha256:f243d33467c7dccdc960f779c896627b806c24930e555c031a50b4d0f7e2fab9
2023/01/06 10:00:23 info unpack layer: sha256:6a4d753ac330f9bc7ecf4e77b9c4e44a4b93c4aaa1fe37fd585c1b419fbd0ad8
2023/01/06 10:00:23 info unpack layer: sha256:1ad759e143f36f80d4ea718efc85b40a7d80b75818d9869e027263682c6e89c8
2023/01/06 10:00:23 info unpack layer: sha256:83ab021118e2a67cf71929bea0b9cec8c0008705406ded76519f703876b35b01
2023/01/06 10:00:23 info unpack layer: sha256:6c22f43930cb8d2bfa59b408c25d67f0ac8f9c803d2bc4b38393195c6c006157
2023/01/06 10:00:23 info unpack layer: sha256:f8eac0b5854d0fc2929ca318afc25a7501c4fd3463ba0d36ed5242e1f3d34aff
2023/01/06 10:00:23 info unpack layer: sha256:206e727c2a9c92d5417ea7191e25da7ff36d884a864027ed57e11c858319c372
FATAL: While making image from oci registry: error fetching image to cache: while building SIF from layers: packer failed to pack: while unpacking tmpfs: error unpacking rootfs: unpack layer: unpack entry: opt/conda/envs/qiime2-2022.8/lib/python3.8/site-packages/pandas/tests/io/formats/style/test_format.py: create regular: unpriv.create: open /tmp/build-temp-693377526/rootfs/opt/conda/envs/qiime2-2022.8/lib/python3.8/site-packages/pandas/tests/io/formats/style/test_format.py: no space left on device
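Note that the failing path in the error is under /tmp/build-temp-..., so it looks like Singularity's temporary build/unpack area on the node, rather than the cache itself, is what fills up. If that reading is right, something along these lines (untested on my side; SINGULARITY_TMPDIR is the standard variable for the temporary build directory, and $USERWORK is just our scratch area) would be the relevant knob before launching Nextflow:

# Point Singularity's temporary build/unpack area at a larger filesystem (example path)
export SINGULARITY_TMPDIR=$USERWORK/singularity_tmp
mkdir -p "$SINGULARITY_TMPDIR"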
Relevant files
nextflow.log
System information
- Nextflow version: 22.10.1
- Hardware: HPC
- Executor: SLURM
- Container engine: Singularity
- OS: Linux 3.10.0-1160.62.1.el7.x86_64
- AmpliSeq version: 2.4.1