hekmon opened 3 weeks ago
I previously used PyTorch Lightning to implement multi-GPU training, rather than the torch multi-GPU approach you mentioned. The challenge lies in syncing the degradation model across devices. That said, I also train the DAT model on a single 24 GB GPU; you need to decrease the batch size until it fits in memory.
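As a starting point, a minimal sketch of what torch multi-GPU inference could look like with `torch.nn.DataParallel`, which splits a batch across visible GPUs (the tiny conv model here is a hypothetical stand-in for DAT; in practice you would load the real network from its checkpoint):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the DAT upscaling network; in practice,
# load the actual model and its pretrained weights instead.
model = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1))
model.eval()

# Wrap with DataParallel only when more than one GPU is visible;
# otherwise run on a single device (CPU fallback for illustration).
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model).cuda()
elif torch.cuda.is_available():
    model = model.cuda()

with torch.no_grad():
    batch = torch.randn(4, 3, 64, 64)  # a batch of low-res tiles
    if torch.cuda.is_available():
        batch = batch.cuda()
    # DataParallel scatters the batch dimension across GPUs and
    # gathers the outputs back on the default device.
    out = model(batch)
```

For training rather than inference, `torch.nn.parallel.DistributedDataParallel` is generally preferred over `DataParallel`, but it requires launching one process per GPU and is more involved to set up.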
The DAT model can be very heavy, even on a 3090, when a lot of images need to be upscaled. Is there any chance you could implement multi-GPU support so that a second card can be used?
I have no clue how to use torch multi-GPU myself.
Thanks.