Closed: Zeinab5 closed this issue 6 years ago
Hi @Zeinab5,
Yes, we've seen similar errors before with very large RNA-seq inputs - how many reads do your files contain?
@jun-wan - is this the same error we saw?
Phil
@ewels yes, we usually see memory issues in STAR, featureCounts and dupRadar with >30M read pairs.
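If it helps, here is a rough sketch of how the memory for those processes could be raised in a custom config file supplied with Nextflow's `-c` option - the process names and values below are only placeholders, so check the pipeline's own config files for the real ones:

```groovy
// Hypothetical override config (e.g. more_memory.config) - pass it to the
// pipeline with Nextflow's -c option. The process names and memory values
// are placeholders; the real names live in the pipeline's config files.
process {
    withName: 'star' {
        memory = 64.GB
    }
    withName: 'featureCounts' {
        memory = 32.GB
    }
    withName: 'dupradar' {
        memory = 32.GB
    }
}
```

That file can then be layered on top of the cluster profile, e.g. `-profile hebbe -c more_memory.config`.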
Jun
If the pipeline requires more memory for these processes, could this be changed manually in the hebbe_config file, or via an additional argument in the run command? I am asking because I got this message:
When I ran into the same problem before, with only 2 samples, I got this same error:
I bypassed it by requesting 2 nodes, and the pipeline was able to run. So should I do the same thing for another study of 12 samples (increase the number of nodes)? If so, by how much? Or should I use the suggested parameter?
Going by the error message, I bet we could help by specifying `--limitBAMsortRAM` with `$task.memory` (to dynamically supply the available memory). This wouldn't help featureCounts or dupRadar though...
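Roughly, the idea would be something like this in the alignment process - to be clear, this is not the pipeline's actual process definition, just a sketch of the approach, and the channel names and values are made up:

```groovy
// Placeholder input channels, just to make the sketch self-contained.
star_index = Channel.fromPath(params.star_index)
read_files = Channel.fromPath(params.reads)

process star_align {
    cpus 8
    memory 32.GB

    input:
    file index from star_index
    file reads from read_files

    output:
    file '*.bam' into aligned_bams

    script:
    // task.memory is a Nextflow MemoryUnit; STAR expects a byte count.
    // Leave ~1 GB of headroom below the task's total allocation.
    def sort_mem = task.memory.toBytes() - 1000000000
    """
    STAR --runThreadN ${task.cpus} \\
         --genomeDir ${index} \\
         --readFilesIn ${reads} \\
         --outSAMtype BAM SortedByCoordinate \\
         --limitBAMsortRAM ${sort_mem}
    """
}
```

That way STAR's BAM sorting buffer scales with whatever memory the task was given, rather than relying on STAR's default.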
Hi @Zeinab5,
We've just released a new version of this pipeline over at nf-core/rnaseq. In this release we've tried to optimise the way in which large BAM files are handled and have hopefully added some fixes for the problems you've seen here.
I'll be archiving this repository soon, so if you could grab a version of the newly renamed pipeline and give it a go, that would be great! If you see any problems, please create an issue on that repo.
Cheers,
Phil
Hello,
After processing some samples, I got this error:
Thank you,
Zeinab