Closed XCZhou520 closed 1 year ago
You need to make some adjustments for multi-GPU training. The easiest adaptation is data parallelism: just wrap the model with `torch.nn.DataParallel`. If you need to adopt `DistributedDataParallel`, more changes are required; please refer to the PyTorch documentation.
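A minimal sketch of the `DataParallel` route, using a hypothetical toy module in place of the actual diffusion model from this repo (the model and layer sizes here are placeholders):

```python
import torch
import torch.nn as nn

# Hypothetical toy model standing in for the diffusion model.
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(16, 16)

    def forward(self, x):
        return self.net(x)

model = ToyModel()

# DataParallel splits each batch across the visible GPUs and gathers
# the outputs back on the default device. With zero or one GPU it just
# runs the wrapped module, so this sketch is safe to try on CPU too.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.to("cuda" if torch.cuda.is_available() else "cpu")

batch = torch.randn(8, 16)
out = model(batch)
print(out.shape)  # torch.Size([8, 16])
```

The rest of the training loop stays unchanged; note that checkpoints saved from a wrapped model carry a `module.` prefix in their state-dict keys, which you may need to strip when reloading into an unwrapped model.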
Can the training process of the diffusion model support multiple GPUs? I tried using multiple GPUs, but it didn't seem to work.