Open baksh97 opened 4 months ago
At this moment we have nothing built in to make that easy. The thing to do would be to split your input table up into a few pieces and run each piece in a separate process with CUDA_VISIBLE_DEVICES=<gpu to run on>.
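As a rough illustration of that approach, here is a minimal sketch that splits the input CSV into one chunk per GPU and launches a separate inference process for each, pinned to one GPU via CUDA_VISIBLE_DEVICES. The `inference.py` entry point and the `--protein_ligand_csv` / `--out_dir` flags are assumptions here; adjust them to match the command line you actually use.

```python
# Sketch: split the input table across GPUs and run one DiffDock
# inference process per GPU. Flag names below are assumptions.
import os
import subprocess
import pandas as pd

NUM_GPUS = 4                      # number of GPUs to use
INPUT_CSV = "protein_ligand.csv"  # full input table

df = pd.read_csv(INPUT_CSV)
procs = []
for gpu in range(NUM_GPUS):
    chunk = df.iloc[gpu::NUM_GPUS]          # round-robin split of the rows
    chunk_csv = f"chunk_{gpu}.csv"
    chunk.to_csv(chunk_csv, index=False)

    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = str(gpu)  # pin this process to one GPU
    procs.append(subprocess.Popen(
        ["python", "inference.py",
         "--protein_ligand_csv", chunk_csv,
         "--out_dir", f"results_gpu{gpu}"],
        env=env,
    ))

# Wait for all chunks to finish
for p in procs:
    p.wait()
```

Each process only sees the single GPU named in its CUDA_VISIBLE_DEVICES, so the chunks run in parallel without the processes competing for the same device.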
I have the same problem. Can you explain why this happens?
I have a multi-GPU machine and want to run DiffDock's inference on all of the GPUs. Is it currently possible?