peterk87 / nf-villumina

Generic viral Illumina sequence analysis pipeline
MIT License

Pipeline execution error - Invalid method `call` #19

Open mnebroski opened 4 years ago

mnebroski commented 4 years ago

I have run into this error lately while running villumina and was wondering what could be causing it. Is it an issue with the sample files themselves, or something else? The files for this sample are extremely small (< 1 MB), and I've seen the same failure on at least one other sample like that, but other samples with similarly small files ran without this error.

Invalid method `call` invocation with arguments: [16S-NSwab-19-05-0dpi, /home/bio/Desktop/Michelle/Reston_Bac_Fung_SPU/200525_G5553/work/c5/0901b08c816d4fd8ecc51858aaa96a/16S-NSwab-19-05-0dpi_1.fastp.fastq.gz, /home/bio/Desktop/Michelle/Reston_Bac_Fung_SPU/200525_G5553/work/c5/0901b08c816d4fd8ecc51858aaa96a/16S-NSwab-19-05-0dpi_2.fastp.fastq.gz, /home/bio/Desktop/Michelle/Reston_Bac_Fung_SPU/200525_G5553/work/c5/0901b08c816d4fd8ecc51858aaa96a/16S-NSwab-19-05-0dpi-kraken2_results.tsv, /home/bio/Desktop/Michelle/Reston_Bac_Fung_SPU/200525_G5553/work/c5/0901b08c816d4fd8ecc51858aaa96a/16S-NSwab-19-05-0dpi-kraken2_report.tsv, null] (java.util.ArrayList) on _closure69 type

Oops... Pipeline execution stopped with the following message: No signature of method: Script_5f286414$_runScript_closure20$_closure66$_closure69.call() is applicable for argument types: (ArrayList) values: [[16S-NSwab-19-05-0dpi, /home/bio/Desktop/Michelle/Reston_Bac_Fung_SPU/200525_G5553/work/c5/0901b08c816d4fd8ecc51858aaa96a/16S-NSwab-19-05-0dpi_1.fastp.fastq.gz, ...]]
Possible solutions: any(), any(), tap(groovy.lang.Closure), each(groovy.lang.Closure), any(groovy.lang.Closure), tap(groovy.lang.Closure)
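If it helps, my understanding is that this Groovy error usually means a closure with a fixed parameter list was handed a tuple of a different shape (note the trailing null in the argument list above). A minimal standalone Groovy sketch with made-up names:

// A multi-parameter closure spreads a single List argument across its
// parameters, but throws a MissingMethodException ("No signature of
// method ... is applicable for argument types: (ArrayList)") when the
// list size doesn't match the parameter count.
def handler = { sample, r1, r2 -> println "$sample: $r1 $r2" }

handler(['S1', 'S1_1.fastq.gz', 'S1_2.fastq.gz'])        // OK: 3 elements, 3 params
handler(['S1', 'S1_1.fastq.gz', 'S1_2.fastq.gz', null])  // throws: 4 elements, 3 params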
peterk87 commented 4 years ago

I think it might be too many instances of Centrifuge each using too much memory, causing other processes to fail. You can raise the memory requirement for Centrifuge from the default (64 GB, I think) to 128 GB with a config file containing the following:

process {
  // Override resource requests for the Centrifuge process only
  withName: CENTRIFUGE {
    errorStrategy = 'retry'  // retry the task instead of failing the whole run
    cpus = 28
    memory = 128.GB          // up from the 64 GB default
  }
}

When running the workflow, you can tell Nextflow to use the config file with -c /path/to/custom-villumina-workflow.config.
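For example (the command is a sketch; -resume is optional but avoids recomputing tasks that already finished):

nextflow run peterk87/nf-villumina \
  -c /path/to/custom-villumina-workflow.config \
  -resume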

When running the workflow on a Slurm cluster, a job would just get killed if it crossed its memory limit, but on a local machine, unfortunately, it'll just happily keep going until all memory is exhausted.

It's a pain to have to set the memory requirements manually, so I'm looking into dynamically setting them based on the total index size.
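In the meantime, one pattern worth trying (a sketch, not something the pipeline does yet) is a dynamic memory directive that escalates on each retry, so a task killed for exceeding its limit gets more memory the next time around:

process {
  withName: CENTRIFUGE {
    errorStrategy = 'retry'
    maxRetries = 2
    // Nextflow re-evaluates this closure on every attempt: 64 GB on the
    // first try, 128 GB on the first retry, 192 GB on the second.
    memory = { 64.GB * task.attempt }
  }
}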