Closed mbhall88 closed 5 years ago
The main thing this affects is how the job runs on the cluster. Your PRGs with the de novo variants added will require a lot more memory than (for example) the tb or kpne PRGs I generated. What about scaling memory with the cube of the task attempt instead? That would get there quicker for you, without negatively affecting the memory-efficiency rating on the cluster for those PRGs which are really quick. Alternatively, set a Nextflow parameter as the memory starting point, and pass in the higher value where it's needed?
Ok. I've left it at 0.1 GB and changed to cubic scaling.
I find that a lot of my PRG jobs max out the memory limit, even on the 10th attempt. Starting with 0.5 GB seems to work for all 1800 jobs I just ran, so I guess that's a good default?
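For reference, the cubic scaling with a configurable starting point discussed above could be sketched as a Nextflow process directive like the one below. This is a hedged sketch, not the actual pipeline code: the process name `build_prg`, the parameter name `params.prg_start_mem`, and the retry count are all assumptions for illustration.

```groovy
// Hypothetical process illustrating cubic memory scaling per attempt.
// params.prg_start_mem is an assumed parameter name for the baseline
// (e.g. 0.5 GB, the default suggested in this thread), overridable with
// --prg_start_mem on the command line for memory-hungry PRGs.
params.prg_start_mem = 0.5

process build_prg {
    // Memory grows with the cube of the attempt number:
    // attempt 1 -> 1x baseline, attempt 2 -> 8x, attempt 3 -> 27x, ...
    memory { params.prg_start_mem.GB * task.attempt ** 3 }
    errorStrategy 'retry'
    maxRetries 9

    script:
    """
    echo "building PRG with ${task.memory}"
    """
}
```

With a 0.5 GB baseline this reaches 4 GB on the second attempt and 13.5 GB on the third, so jobs that fail on memory get there much quicker than with linear scaling, while quick PRGs still request the small baseline on their first attempt.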