gcorso / DiffDock

Implementation of DiffDock: Diffusion Steps, Twists, and Turns for Molecular Docking
https://arxiv.org/abs/2210.01776
MIT License

Running inference on multiple GPUs #226

Open baksh97 opened 4 months ago

baksh97 commented 4 months ago

I have a multi-GPU machine and want to run DiffDock's inference on all of the GPUs. Is it currently possible?

jsilter commented 4 months ago

At the moment we have nothing built in to make that easy. The thing to do would be to split your input table into a few pieces and run each piece in a separate process with CUDA_VISIBLE_DEVICES=<gpu to run on>.
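A minimal sketch of that manual split. Assumptions not stated in this thread: the inference entry point is called `inference.py` and accepts a `--protein_ligand_csv` argument, as in the repo's README examples; check your checkout's CLI and adjust the command accordingly.

```python
# Hypothetical helper for the manual multi-GPU split described above.
# Assumption: DiffDock inference is invoked as
#   python inference.py --protein_ligand_csv <table.csv>
# (verify against your version of the repo before using).
import csv
import os
import subprocess


def chunk_rows(rows, n_gpus):
    """Distribute rows round-robin into one list per GPU."""
    chunks = [[] for _ in range(n_gpus)]
    for i, row in enumerate(rows):
        chunks[i % n_gpus].append(row)
    return chunks


def write_shards(csv_path, n_gpus, out_prefix="shard"):
    """Split the input table into n_gpus smaller CSVs, keeping the header."""
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        chunks = chunk_rows(list(reader), n_gpus)
    paths = []
    for gpu, rows in enumerate(chunks):
        path = f"{out_prefix}_{gpu}.csv"
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(header)
            writer.writerows(rows)
        paths.append(path)
    return paths


def launch(shard_paths):
    """Start one inference process per shard, each pinned to its own GPU
    via CUDA_VISIBLE_DEVICES, then wait for all of them to finish."""
    procs = []
    for gpu, path in enumerate(shard_paths):
        env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
        procs.append(subprocess.Popen(
            ["python", "inference.py", "--protein_ligand_csv", path],
            env=env,
        ))
    for p in procs:
        p.wait()
```

Usage would be `launch(write_shards("my_table.csv", n_gpus=4))`. Round-robin sharding keeps the per-GPU workload roughly balanced; since each process only sees one device, no code changes inside DiffDock are needed.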

purnawanpp commented 4 months ago

I have the same problem. Can you explain why this happens?