jkosinski closed this issue 7 months ago
I think you can reduce `alphafold_inference_threads` in config.yaml:

```yaml
alphafold_inference_threads: 8
alphafold_inference: >
  gres=gpu:1
  partition=gpu-el8
  qos=high
  constraint=gpu=3090
```

so that with `alphafold_inference_threads: 1` it starts with 128 * 1/8 = 16 GB.
Additionally, it works best for me if I set qos=low and remove the GPU constraint entirely, but of course there's no need to change that in git.
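As a sketch, the lower-footprint variant described above might look like this in config.yaml (values are illustrative, not recommended defaults):

```yaml
alphafold_inference_threads: 1
alphafold_inference: >
  gres=gpu:1
  partition=gpu-el8
  qos=low
```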
Hmm, ok, I thought it would have the adverse effect of changing the number of CPUs/tasks run, but it seems to have no such effect. So maybe just rename this to something more intuitive?
Changing `alphafold_inference_threads` would also change the number of CPUs. Just adding the base memory usage as an argument to config.yaml and setting the default to whatever we use in AP should work fine.
Edit: See 66003868bb41d02cb7bcf2f51645282e7ca9f9cd
Perfect, thanks! Not sure which defaults are optimal, but we can start with what you put in.
Here: https://github.com/KosinskiLab/AlphaPulldownSnakemake/blob/fab6fbb7b110f6dc8cf43a9f2ea5f5406f437d3a/workflow/Snakefile#L272 the starting memory is 128000, which might be too high. For example, on our cluster, depending on the node, it can leave GPUs idle because the node's RAM is exhausted. It can also limit how many jobs you can run in total because of per-user cluster limits. For many protein-protein screens, starting with something like 32000 or 64000 would be sufficient. Maybe add this starting value to the user-defined parameters?
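A minimal sketch of what exposing the starting value could look like, assuming a hypothetical config.yaml key `alphafold_inference_ram` and a double-on-retry policy (both are assumptions for illustration, not what the Snakefile currently does):

```python
def alphafold_inference_mem_mb(config, attempt=1):
    """Per-job memory (MB) for the inference rule.

    'alphafold_inference_ram' is a hypothetical config.yaml key for the
    user-defined starting memory; 64000 is an illustrative default.
    Memory doubles on each Snakemake retry attempt.
    """
    base = config.get("alphafold_inference_ram", 64000)
    return base * (2 ** (attempt - 1))
```

In a Snakefile this could then be wired up as `resources: mem_mb=lambda wildcards, attempt: alphafold_inference_mem_mb(config, attempt)`, so users can lower the starting value for large screens without editing the workflow.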