diffusion-hyperfeatures / diffusion_hyperfeatures

Official PyTorch Implementation for Diffusion Hyperfeatures, NeurIPS 2023

Running on multiple GPUs #8

Open Maram-Helmy opened 6 months ago

Maram-Helmy commented 6 months ago

Thank you for your work!

How can I run this code on multiple GPUs?

g-luo commented 4 months ago

Hi! While the current implementation uses a single GPU by default, you can look into PyTorch DistributedDataParallel (https://pytorch.org/docs/stable/notes/ddp.html) for multi-GPU support. You would also need to add logic that checks the device's local_rank and performs some operations on only one process (e.g., wandb logging or model checkpointing).

If folks have a strong interest in multi-GPU support, please feel free to comment on or upvote this thread, and I can start a branch for it.