Hello, thanks for sharing your code! Would it be possible for me to run the processes across multiple GPUs? I have a few low-powered ones, so this would speed my jobs up a lot. Thanks!

Good morning. Yes, running across multiple GPUs is already implemented in the code. You will need to set the flag use_ddp = True; with that flag set, training goes through torch's DistributedDataParallel (DDP) in the file guided_diffusion/train_util.py. Let me know whether this answers your question. Best, Julia
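For reference, here is a minimal sketch of what a DDP training step generally looks like in PyTorch. This is illustrative only, not the repo's actual guided_diffusion/train_util.py code; the model, optimizer, and data here are placeholders, and the real script is driven by the use_ddp flag mentioned above.

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def setup_ddp():
    # torchrun sets RANK, LOCAL_RANK, WORLD_SIZE, MASTER_ADDR, and
    # MASTER_PORT in the environment for each spawned process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    return local_rank


def main():
    local_rank = setup_ddp()

    # Placeholder model; in the repo this would be the diffusion model
    # constructed by the training script.
    model = torch.nn.Linear(16, 16).to(local_rank)

    # DDP keeps one model replica per process (one per GPU) and
    # all-reduces gradients across processes during backward().
    ddp_model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.Adam(ddp_model.parameters(), lr=1e-4)

    # Dummy batch standing in for one training step.
    x = torch.randn(8, 16, device=local_rank)
    loss = ddp_model(x).pow(2).mean()
    loss.backward()   # gradients are synchronized across GPUs here
    optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

A script like this would be launched with one process per GPU, e.g. `torchrun --nproc_per_node=NUM_GPUS train.py` (script name hypothetical), which is the standard way DDP distributes work across the available devices.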