Closed: anse3832 closed this issue 2 years ago
Hi,
Thanks for your question. Since we use distributed parallel training, the data we prepare is for a single GPU; PyTorch then distributes the data to all processes in the world (one per GPU). In your modified implementation, I think it will report a dimension mismatch error when the world size is larger than 1.
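For context, here is a minimal sketch (not taken from the DASR repository) of how PyTorch's `DistributedSampler` gives each process its own shard, so each rank only needs the per-GPU batch size and no manual scaling:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

# Minimal sketch, assuming torch.distributed is already initialized
# (e.g., launched with torchrun); the dataset and sizes are placeholders.
dataset = TensorDataset(torch.randn(64, 3, 32, 32))

# DistributedSampler splits the dataset indices across world_size ranks,
# so each process loads only its own shard.
sampler = DistributedSampler(dataset)

# batch_size here is the per-GPU batch size; the effective global batch
# is batch_size * world_size, handled by the sampler automatically.
loader = DataLoader(dataset, batch_size=8, sampler=sampler)

for epoch in range(2):
    sampler.set_epoch(epoch)  # reshuffle consistently across ranks each epoch
    for (batch,) in loader:
        pass  # forward/backward would go here
```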
When I tried to use multiple GPUs, the code raised an error. (Unfortunately, I didn't save the error message, but it was a dimension mismatch.)
So I fixed the code in DASR/dasr/models/DASR_model.py by multiplying by self.opt['num_gpu'], and it now works well (see the sketch below).
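The original snippet was not preserved in this thread; the following is only a hypothetical sketch of the described change, with `batch_size_per_gpu` as an assumed option key, not the actual DASR code:

```python
# Hypothetical sketch (names are assumptions, not the actual DASR code):
# scale a per-GPU quantity by the number of GPUs so tensor shapes match
# when a single process drives several GPUs (e.g., torch.nn.DataParallel).
class DASRModel:
    def __init__(self, opt):
        self.opt = opt
        # The original code presumably used the per-GPU batch size directly;
        # multiplying by num_gpu gives the total batch seen by this process.
        self.batch_size = opt['batch_size_per_gpu'] * opt['num_gpu']

model = DASRModel({'batch_size_per_gpu': 8, 'num_gpu': 2})
print(model.batch_size)  # 16
```

Note that this scaling only makes sense when one process feeds all GPUs (DataParallel-style); under DistributedDataParallel each rank should keep the per-GPU size, which matches the maintainer's point above.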
Please check if my correction is adequate. Thanks!