Lizhuoling opened 4 weeks ago
Does the code support distributed training across multiple GPUs? Or does training only need a single GPU, so multi-GPU training is unnecessary?
Our current training all runs on a single GPU (an A100), so the code does not support multi-GPU training at the moment. It shouldn't be hard to add, though, by using PyTorch DDP.
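For anyone who wants to try this, here is a minimal sketch of what the DDP wrapping could look like. The model, dataset, and hyperparameters below are placeholders rather than this repo's actual code, and the script assumes a `torchrun` launch:

```python
# Minimal sketch of porting a single-GPU training loop to PyTorch DDP.
# The model, dataset, and hyperparameters are placeholders, not this repo's code.
# Launch with: torchrun --nproc_per_node=<num_gpus> train_ddp.py

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; substitute the actual model from the repo.
    model = torch.nn.Linear(128, 10).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    # Placeholder dataset; DistributedSampler gives each rank a disjoint shard.
    dataset = TensorDataset(torch.randn(1024, 128), torch.randint(0, 10, (1024,)))
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(3):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()  # DDP all-reduces gradients across ranks here
            optimizer.step()
        if dist.get_rank() == 0:  # log from one rank to avoid duplicate output
            print(f"epoch {epoch}: loss {loss.item():.4f}")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

The main changes versus a single-GPU script are the process-group setup, wrapping the model in `DDP`, and using a `DistributedSampler` so each GPU sees a different slice of the data; checkpointing and logging should generally be restricted to rank 0.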