MaestSi / UMInator

A Nextflow pipeline for generating consensus sequences from Nanopore reads tagged with UMIs
GNU General Public License v3.0

Running test data on server with singularity #6

Closed · laulambr closed this issue 3 months ago

laulambr commented 3 months ago

Hi!

I tried to run your pipeline on the Karst test data. Am I doing something wrong? I used the following command but ran into an error with the singularity image (see below):

```
nextflow -c UMInator.conf run UMInator.nf --FW_adapter=CAAGCAGAAGACGGCATACGAGAT --RV_adapter=AATGATACGGCGACCACCGAGATC --FW_primer=AGRGTTYGATYMTGGCTCAG --RV_primer=CGACATCGAGGTGCCAAAC --fastq_files=/scratch/gent/430/vsc43097/UMInator/test_data/test_reads.fastq --results_dir=/scratch/gent/430/vsc43097/UMInator/test_output --minQ=7 --minLen=3000 --maxLen=6000 --min_UMI_freq=10 --scripts_dir=/scratch/gent/430/vsc43097/UMInator/scripts --medaka_model=r941_min_high_g330 -profile singularity -with-report
```

Error message:

Workflow execution completed unsuccessfully!
The exit status of the task that caused the workflow execution to fail was: null.

The full error message was:

Error executing process > 'readsFiltering (1)'

Caused by:
  Failed to pull singularity image
  command: singularity pull  --name maestsi-uminator-latest.img.pulling.1712915481889 docker://maestsi/uminator:latest > /dev/null
  status : 255
  message:
    INFO:    Converting OCI blobs to SIF format
    WARNING: 'nodev' mount option set on /kyukon/scratch, it could be a source of failure during build process
    INFO:    Starting build...
    Getting image source signatures
    Copying blob sha256:f754d3577455705c0e50b2675ae345db330692767d2f493f96ae39d18b266820
    Copying blob sha256:c256cb8a03f53bc258cce8c545ce0d4fd1f17cf695dfd11aa550f602a020e181
    Copying blob sha256:96470ebef4adc6521c81f93d2437e182de590faa6efdcc38a83484b502f46d6c
    Copying blob sha256:a2abf6c4d29d43a4bf9fbb769f524d0fb36a2edab49819c1bf3e76f409f953ea
    Copying blob sha256:19249a42e74b53bb4d0822bcfd51e346916bed75763a3d623674c0dc9b6b9d8e
    Copying blob sha256:0ce0c224b7f8aee64f9b87eefba859d7ceb5b4823c24909be98e73526f21672f
    Copying blob sha256:34e119d392c3f6268d326c1d6357f7eab8b2560d09891d99e1dd5f4c69442709
    Copying blob sha256:4a3ab7f831542bd98a1d67fb6875a8d38c649c09daa54c4810c5fce5e72b9cce
    Copying blob sha256:5f6b043445a974a9e363e54964d8399e72e347cb9b03ef052e69f2a6fef48861
    Copying blob sha256:d6c2ef1b6036464f900498de08a1e0f08149c7708c7bcdde88b18a2231bda43e
    Copying blob sha256:1dc6b455597870f2f4a648d614d8cd28c7969ea255ac4d451e19cf073e8ee8af
    Copying blob sha256:575f3c23640904b9b9a8eaa1c19905de48372b39123b8299454f93d384f75e99
    Copying blob sha256:f5dd5ba3c82c533b36ddbdd249127bfb75b2dd5bcda9fb65a790150afa375577
    Copying blob sha256:a261d4dcd422755037d6f458ccd4739d7d1406944c27571153853648dc89c440
    Copying blob sha256:dd2cd5565526b5549f931bc572245cdc69f45a5323dc0ad3006397b3df3d01f2
    Copying blob sha256:7b2b7f15a03bff887195059c91bb7caca7399d29e46cb2850e5d4898bf0ae22f
    Copying blob sha256:10bdc8956bf87c8b891ed16e3f08467c00cad4d6053f7f3af1586d4acb189d58
    Copying blob sha256:555b89af91bde0bac832b0a9728bb80913cf2243dedfd714b659bab3e6b636c0
    Copying blob sha256:183c6087402cefda19b2c1532ac29bcf88b705c1a4577912895c44a1bba498d9
    Copying blob sha256:02fb1affab449ab168992952391e31cf637b567aa7a36dbeff3d40c09f6ba77d
    Copying blob sha256:b4142e903e1a96b4227e74ec3d9067bcaa1f9bd4e900d014e65578c2e2a25add
    Copying blob sha256:4a04d9e100eb077209cbcafccc141487641de6eef23007e0ed5907e740527afc
    FATAL:   While making image from oci registry: error fetching image to cache: while building SIF from layers: conveyor failed to get: while fetching image: initializing source oci:/user/gent/430/vsc43097/.apptainer/cache/blob:75b28d7e19811c68f2db930925364127331f419e49b8b6cf8a6065a87647a4fe: writing blob: write /user/gent/430/vsc43097/.apptainer/cache/blob/oci-put-blob440533092: disk quota exceeded
MaestSi commented 3 months ago

Hi, it looks like a connection error prevented the Singularity image from being downloaded successfully from DockerHub. You should try downloading the image again, either by re-running the command as you did, or by pulling it manually and saving the img file to singularity_cache_dir with:

```
singularity pull --name maestsi-uminator-latest.img docker://maestsi/uminator:latest
```

Let me know if it works. SM
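The FATAL line in the log above also reports "disk quota exceeded" for the cache under /user/gent/430/vsc43097/.apptainer/cache, so on an HPC account with a small home quota it may additionally help to point the Apptainer/Singularity cache and Nextflow's image cache at scratch before pulling. A minimal sketch, assuming the scratch area used in the command above; the exact cache paths are placeholders:

```
# Keep the Apptainer/Singularity blob cache off the quota-limited home directory
export APPTAINER_CACHEDIR=/scratch/gent/430/vsc43097/apptainer_cache      # placeholder path
export SINGULARITY_CACHEDIR=$APPTAINER_CACHEDIR                           # for older Singularity versions

# Directory where Nextflow looks for (and stores) pulled .img files
export NXF_SINGULARITY_CACHEDIR=/scratch/gent/430/vsc43097/singularity_cache_dir  # placeholder path
mkdir -p "$APPTAINER_CACHEDIR" "$NXF_SINGULARITY_CACHEDIR"

# Pull the image manually so the pipeline finds it in the cache instead of pulling it itself
cd "$NXF_SINGULARITY_CACHEDIR"
singularity pull --name maestsi-uminator-latest.img docker://maestsi/uminator:latest
```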

laulambr commented 3 months ago

Yeah, the HPC server I am using had some restrictions in place, but I managed to pull the image successfully by doing something similar to what you suggested. Thank you!

However, when I run the command shown at the end of this comment, I now hit a different issue during the readsFiltering stage:

ERROR ~ Error executing process > 'readsFiltering (1)'
Caused by:
  Failed to submit process to grid scheduler for execution
Command executed:
  qsub -N nf-readsFilteri .command.run
Command exit status:
  1
Command output:
  ERROR: An unsupported option was used: -q

Work dir:
  /kyukon/scratch/gent/vo/001/gvo00120/UMInator/work/5b/9c1b4026e2de486f82a51f1f7090bc
Tip: when you have fixed the problem you can continue the execution adding the option `-resume` to the run command line
 -- Check '.nextflow.log' file for details

```
nextflow -c UMInator.conf run UMInator.nf \
--FW_adapter="CAAGCAGAAGACGGCATACGAGAT" \
--RV_adapter="AATGATACGGCGACCACCGAGATC" \
--FW_primer="AGRGTTYGATYMTGGCTCAG" \
--RV_primer="CGACATCGAGGTGCCAAAC" \
--fastq_files=/scratch/gent/vo/001/gvo00120/UMInator/test_data/test_reads.fq \
--results_dir=/scratch/gent/vo/001/gvo00120/UMInator/test_output \
--minQ=7 \
--minLen=3000 \
--maxLen=6000 \
--min_UMI_freq=10 \
--scripts_dir=/scratch/gent/vo/001/gvo00120/UMInator/scripts \
--medaka_model=r941_min_high_g330 \
-profile singularity -with-report
```
MaestSi commented 3 months ago

Hi, I think the issue is that you are not using a queue management system (such as pbspro, slurm, etc.), while the default executor was set to slurm (I just updated the config file to use local as the default). If this is the case, you should edit line 112 of the UMInator.conf file to:

```
executor = 'local' // set to 'local' if you are not using a queue management system
```

Let me know if this solves the issue. Best, SM
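For context, a minimal sketch of what the relevant process scope in UMInator.conf could look like after that edit; the surrounding directives are assumptions, only the executor line is taken from the comment above:

```
process {
    // set to 'local' if you are not using a queue management system
    executor = 'local'
    // executor = 'slurm'   // or 'pbspro', 'sge', ... when submitting to a scheduler
    // queue    = 'myqueue' // only meaningful for scheduler executors
}
```

Once the executor matches the environment, the run can be continued from the failed step by appending -resume to the original nextflow command, as the tip in the error output suggests.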

MaestSi commented 3 months ago

Hi, I'm going to close the issue due to inactivity. Feel free to reopen it, in case you have any further issues. SM