Closed minhtrung1997 closed 1 year ago
Hi @minhtrung1997, this is not actually an error that requires changes in the pipeline. By design, these values are kept minimal so the pipeline requests the smallest amount of resources possible.
If you need custom values, you can always customize your run by passing a custom .config
file, as shown here: https://nf-co.re/docs/usage/configuration#tuning-workflow-resources.
For example, to give more memory, or a specific amount of memory, to a module, you can create a config file and pass it with -c:

nextflow run fmalmeida/bacannot -c custom.config <other params>
process {
    withName: DIGIS {
        memory = 200.GB
    }
}
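Since the default requests are attached to process labels, you can also raise resources for a whole label group instead of a single module. This is a sketch only: `withLabel` is standard Nextflow, but the label name `process_high` follows the common nf-core convention and may not match the labels this pipeline actually uses.

```groovy
// custom.config -- raise resources for every process carrying this label
// (label name is an assumption; check the pipeline's module definitions)
process {
    withLabel: process_high {
        memory = 100.GB
        time   = 24.h
        cpus   = 8
    }
}
```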
For some specific errors, the pipeline also retries the task with increased resources, up to the maxima allowed by --max_time, --max_memory and --max_cpus.
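The retry-with-more-resources behaviour described above is typically implemented in Nextflow by scaling the request with the attempt number. A minimal sketch, assuming the `DIGIS` process name from the example earlier (the base memory value here is illustrative, not the pipeline's actual default):

```groovy
// custom.config -- retry on failure, doubling memory each attempt
process {
    withName: DIGIS {
        errorStrategy = 'retry'          // resubmit the task instead of failing
        maxRetries    = 3                // give up after 3 resubmissions
        memory        = { 50.GB * task.attempt }  // 50, 100, 150 GB across attempts
    }
}
```

On the command line, the ceilings for this scaling are set with the pipeline parameters mentioned above, e.g. `--max_memory '200.GB' --max_cpus 16 --max_time '48.h'`.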
Did the solution above help? Or is there something we still need to look at in more detail?
Thank you, we can run the code seamlessly now thanks to this experience :))
Hi Dev team,

Thank you for your graceful tool. I contacted you about the previous bug and am now testing to see whether it is solved.

Meanwhile, while testing on SLURM (process.executor = 'slurm'), I noticed a shortfall in the resource labels: as the attached logs show, some modules need more time and RAM than their label specifies. This does not show up on local runs, since local execution does not enforce resource requests as strictly as SLURM does. Hope to see an improvement soon!

out_of_memory.REFSEQ_MASHER.txt
[Timeout.GET_NCBI_PROTEIN.txt](https://github.com/fmalmeida/bacannot/files/10434614/Timeout.GET_NCBI_PROTEIN.txt)
MemoryError.DigIS.txt