I'm pretty sure hhblits and jackhmmer are locked to ncpu=8 (or something) in the AlphaFold pipeline. So we may as well start and end there.
Trillian (k095) also has multiples of 8 CPUs per GPU. Will check for the incoming H200 nodes.
It also has reasonably large system RAM: 125 GB per GPU?
Might already be handled by lines 70-80, since we basically always want people using the GPU. I don't think we should support CPU offloading for inference
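If we do pin the MSA stages to 8 CPUs, it could look something like the sketch below in a Nextflow config. This is only an illustration under the assumptions above (8-thread hhblits/jackhmmer, 125 GB RAM per GPU); the `msa_search` label name and the memory figure are hypothetical, not something from the proteinfold repo.

```groovy
// Sketch only: pin the MSA search stages (hhblits/jackhmmer) to 8 CPUs,
// assuming they are effectively fixed at 8 threads in the AlphaFold pipeline.
// The label name "msa_search" and the memory value are hypothetical.
process {
    withLabel:msa_search {
        cpus   = 8
        memory = '125 GB'   // assumed per-GPU system RAM, per the note above
    }
}
```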
https://github.com/Australian-Structural-Biology-Computing/proteinfold/blob/8ce05982ff57ba3722c4e87093307b68f9923d43/conf/katana.config#L18-L20
```groovy
}
withLabel:gpu_compute {
    queue       = "${params.gpuQueue}"
    accelerator = 1
    clusterOptions = { "-l select=1:ngpus=1:ncpus=${task.cpus}:mem=${task.memory.toMega()}mb" }
```