Hi there,
I have been trying to enable training on a multi-GPU machine using the DistributedDataParallel method. I'm realising that the current dataloader may need to be modified to support this. Just wondered if anyone else has had a go at this?
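For what it's worth, the usual change is to wrap the dataset with PyTorch's `DistributedSampler`, so each DDP process sees a distinct shard of the data rather than the full set. Here's a minimal sketch of that idea; `ToyDataset` is just a hypothetical stand-in for the project's real dataset, and `num_replicas`/`rank` are passed explicitly so it runs in a single process (under `torchrun` they would come from the initialised process group):

```python
import torch
from torch.utils.data import DataLoader, Dataset, DistributedSampler

class ToyDataset(Dataset):
    """Hypothetical stand-in for the project's real dataset."""
    def __init__(self, n=8):
        self.data = torch.arange(n)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]

dataset = ToyDataset()

# In a real DDP run, num_replicas and rank are taken from the process
# group; here they are fixed so this sketch works in one process.
sampler = DistributedSampler(dataset, num_replicas=2, rank=0, shuffle=False)
loader = DataLoader(dataset, batch_size=2, sampler=sampler)

# Call set_epoch(epoch) each epoch so shuffling differs across epochs
# (it is a no-op here because shuffle=False, but good practice).
sampler.set_epoch(0)
batches = [b.tolist() for b in loader]
# With 2 replicas, rank 0 gets the interleaved half of the indices:
# batches == [[0, 2], [4, 6]]
```

Each rank builds its own `DataLoader` with its own sampler, so no single process ever iterates the whole dataset.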
Cheers! Paul