biocorecrg / master_of_pores

Nextflow pipeline for analysis of direct RNA Nanopore reads
https://biocorecrg.github.io/master_of_pores/
MIT License

run with PBS cluster error #81

Closed pl618 closed 4 years ago

pl618 commented 4 years ago

Hi,

It works fine when I run the test file locally; however, when I use the PBS profile, it seems the Singularity image environment can't be bound.

Here is the error message:

executor >  pbs (1)
[3a/860a69] process > testInput (multifast5_1.fast5) [  0%] 0 of 1
[-        ] process > baseCalling           -
[-        ] process > concatenateFastQFiles -
[-        ] process > QC                    -
executor >  pbs (1)
[3a/860a69] process > testInput (multifast5_1.fast5) [100%] 1 of 1, failed: 1 ✘
[-        ] process > baseCalling           -
[-        ] process > concatenateFastQFiles -
[-        ] process > QC                    -
[-        ] process > fastQC                -
[-        ] process > mapping               -
[-        ] process > alnQC                 -
[-        ] process > joinAlnQCs            -
[-        ] process > alnQC2                -
[-        ] process > multiQC               -
Skipping the email
Error executing process > 'testInput (multifast5_1.fast5)'

Caused by: Process testInput (multifast5_1.fast5) terminated with an error exit status (1)

Command executed:

/usr/local/bin/fast5_type.py multifast5_1.fast5

Command exit status: 1

Command output: (empty)

Command error:

Traceback (most recent call last):
  File "/usr/local/bin/fast5_type.py", line 3, in <module>
    import h5py, ont_fast5_api
ModuleNotFoundError: No module named 'ont_fast5_api'
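The traceback means the Python environment inside the container image used on the cluster has no ont_fast5_api module (h5py may be missing too). A minimal sketch of a check that can be run inside the image, e.g. with singularity exec; the script name check_mods.py is hypothetical:

```python
# check_mods.py (hypothetical name) -- report whether the modules that
# fast5_type.py imports (per the traceback above) are installed in this
# Python environment. Run inside the image, e.g.:
#   singularity exec basic_bin_py3.simg python3 check_mods.py
import importlib.util

def module_status(names):
    """Map each module name to True if it can be imported, else False."""
    return {n: importlib.util.find_spec(n) is not None for n in names}

if __name__ == "__main__":
    for mod, ok in module_status(["h5py", "ont_fast5_api"]).items():
        print(f"{mod}: {'OK' if ok else 'MISSING'}")
```

If ont_fast5_api shows MISSING here, the image itself lacks the module and the failure is not specific to PBS.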

Here is the profile setting in nextflow.global.config:

profiles {
    local {
            process.executor = 'local'
            process.cpus = 4
            process.memory = '12GB'
    }
    pbs {
            process.executor = 'pbs'
            process.queue = 'cpuq'
            process.cpus = 4
            process.memory = '10GB'
            process.time = '100000h'
    }
}
singularity.autoMounts = true
singularity.enabled = true

process {
  memory='12G'
  cache='lenient'
  container = '/home/singularity_app/nanopore_apps/preprocessing/preprocessing_v5.simg'
  containerOptions = { workflow.containerEngine == "docker" ? '-u $(id -u):$(id -g)': null}
  withLabel: big_cpus {
        cpus = 8
        memory = '12G'
  }
  withLabel: big_mem_cpus {
        cpus = 8
        memory = '20G'
  }
  withLabel: demulti {
        container = '/home/singularity_app/nanopore_apps/demulti/demulti.simg'
        cpus = 8
        memory = '20G'
  }
  withLabel: basecall_cpus {
        container = '/home/singularity_app/nanopore_apps/basecall/basecall.simg'
        cpus = 8
        memory = '5G'
  }
  withLabel: basecall_gpus {
        container = '/home/singularity_app/nanopore_apps/basecall/basecall_gpu.simg'
        cpus = 2 
        maxForks = 1
        containerOptions = { workflow.containerEngine == "singularity" ? '--nv':
           ( workflow.containerEngine == "docker" ? '-u $(id -u):$(id -g) --gpus all': null ) } 
  }
  withName: multiQC {
        container = '/home/singularity_app/nanopore_apps/preprocessing/multiqc.simg'
        cpus = 4
        memory = '4G'
  }
  withName: testInput {
        container = '/home/singularity_app/nanopore_apps/basic_bin_py3/basic_bin_py3.simg'
        cpus = 4
        memory = '4G'
  }
}
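An image that works locally but fails on cluster nodes can also be a bind-path problem rather than a broken image, so it can be worth making the mounts explicit. A minimal sketch of such a config addition, assuming /home/singularity_app is the shared path from the settings above; the runOptions line is an assumption, not part of the pipeline's shipped config:

```groovy
// Hypothetical addition to nextflow.global.config: bind the image/tool
// directory explicitly so PBS compute nodes see the same paths as the
// head node. autoMounts covers many cases, but an explicit -B bind is a
// common fix when containers behave differently on cluster nodes.
singularity {
    enabled    = true
    autoMounts = true
    runOptions = '-B /home/singularity_app'  // assumed shared filesystem path
}
```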
lucacozzuto commented 4 years ago

Hi. I'm not sure what you are doing :) So you are downloading the images yourself and renaming them?

Why are you using a different image for testInput?

L

pl618 commented 4 years ago

The images work locally but not on the PBS cluster. I have tested both the downloaded images and images I built myself; they behave the same way. Downloading the Guppy package requires a Nanopore account, but it can also be installed with the commands below, so I can build a Singularity image that runs Guppy without downloading it.


apt-get update
apt-get install -y wget lsb-release apt-transport-https python3-setuptools curl locales
export PLATFORM=$(lsb_release -cs)
wget -O- https://mirror.oxfordnanoportal.com/apt/ont-repo.pub | apt-key add -
echo "deb http://mirror.oxfordnanoportal.com/apt ${PLATFORM}-stable non-free" | tee /etc/apt/sources.list.d/nanoporetech.sources.list
apt-get update
apt-get install -y ont-guppy-cpu
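The commands above can be wrapped in a Singularity definition file so the image build is reproducible. A sketch under assumptions: the Ubuntu base tag and file names are illustrative, and gnupg is added because apt-key needs it:

```
# guppy_cpu.def (hypothetical name) -- wraps the install commands above
# in a Singularity definition; base image tag is an assumption.
Bootstrap: docker
From: ubuntu:18.04

%post
    apt-get update
    apt-get install -y wget lsb-release apt-transport-https python3-setuptools curl locales gnupg
    export PLATFORM=$(lsb_release -cs)
    wget -O- https://mirror.oxfordnanoportal.com/apt/ont-repo.pub | apt-key add -
    echo "deb http://mirror.oxfordnanoportal.com/apt ${PLATFORM}-stable non-free" | tee /etc/apt/sources.list.d/nanoporetech.sources.list
    apt-get update
    apt-get install -y ont-guppy-cpu

%runscript
    exec guppy_basecaller "$@"
```

Built with something like: sudo singularity build guppy_cpu.simg guppy_cpu.def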

Have you run Master of Pores on a PBS cluster (or another cluster) with Singularity images?

lucacozzuto commented 4 years ago

Yes, I use Singularity images on our HPC. They are made on the fly by Nextflow, which converts our Docker images from Docker Hub when the -with-singularity option is used. They are then stored in a folder named singularity and can be reused. But the problem you have is different: the new image you built, basic_bin_py3.simg, is what is giving you errors. I recommend using our images; they are already tested.

pl618 commented 4 years ago

OK, thanks a lot. I'll try it later.