Closed dppss90008 closed 1 day ago
Hello,
Thank you for your question. I can update the resources for this step for the next update of the pipeline.
In the meantime, you can edit conf/base.config. In that file, you can edit the resources for process_high_memory from this:
withLabel:process_high_memory {
    memory = { 200.GB * task.attempt }
}
to this (or more depending on your preference):
withLabel:process_high_memory {
    cpus   = { 20 * task.attempt }
    memory = { 200.GB * task.attempt }
    time   = { 100.h * task.attempt }
}
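If you would rather not edit the pipeline's own files, Nextflow also lets you override these labels from a separate config passed with `-c` at runtime; this way your settings survive pipeline updates. A minimal sketch (the filename `custom.config` is just an example):

```groovy
// custom.config -- hypothetical filename; pass with: nextflow run ... -c custom.config
// Label overrides must sit inside a process scope, mirroring conf/base.config.
process {
    withLabel:process_high_memory {
        cpus   = { 20 * task.attempt }
        memory = { 200.GB * task.attempt }
        time   = { 100.h * task.attempt }
    }
}
```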
Currently, the Trinity module uses 0.8 of the memory allocated to process_high_memory (i.e. 0.8 × 200 GB = 160 GB).
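For context, that 0.8 factor typically comes from the module passing only a fraction of the allocated memory to Trinity's own `--max_memory` option, along these lines (a paraphrase of the module's logic, not verbatim code):

```groovy
// Sketch (assumed, not the exact module source): derive Trinity's memory cap
// from the Nextflow task allocation, keeping a 20% safety margin.
def avail_mem = (task.memory.toGiga() * 0.8).intValue()
// then supplied to Trinity as: --max_memory ${avail_mem}G
```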
Best, Avani
I have increased the resources for this step to the following:
withLabel:process_high_memory {
    cpus   = { 20 * task.attempt }
    memory = { 320.GB * task.attempt }
    time   = { 200.h * task.attempt }
}
Please let me know whether these settings work for you, and share the size of the datasets you are using (e.g. number of samples, number of reads per sample).
Thanks!
Description of feature
Hello,
Thank you for developing this fantastic pipeline. I am currently trying to use it to process my human RNA-seq data.
This is the command I used:
However, Nextflow raised the following error:
I suspect the issue is caused by Trinity. I am wondering if using the following parameter might help:
Are there any other parameters I can modify to improve performance? I noticed that Trinity is only using 12 CPUs and 160 GB of memory. Could I allocate more memory or CPUs to speed up the process?
Thank you
Chih-Hung, Hsieh (CH)