Open chirag1903 opened 2 years ago
When training with a batch size of 1024, I still get a CUDA out-of-memory error on 5 GPUs. Is distributed training available for this implementation?
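For context, a minimal sketch of one common way to shard a large global batch across GPUs with PyTorch `DistributedDataParallel` (assuming this repo is PyTorch-based, which the issue does not confirm; the model, dataset, and script name `train_ddp.py` are placeholders, not the repo's code):

```python
# Hypothetical DDP sketch: each process owns one GPU and only
# global_batch / world_size samples, so per-GPU memory drops.
# Launch with: torchrun --nproc_per_node=5 train_ddp.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def main():
    # torchrun sets RANK / LOCAL_RANK / WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model and data; substitute the repo's own.
    model = torch.nn.Linear(128, 10).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    data = TensorDataset(torch.randn(4096, 128),
                         torch.randint(0, 10, (4096,)))

    # DistributedSampler shards the dataset across ranks, so a global
    # batch of 1024 becomes ~205 samples per GPU on 5 GPUs.
    sampler = DistributedSampler(data)
    loader = DataLoader(data,
                        batch_size=1024 // dist.get_world_size(),
                        sampler=sampler)

    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = torch.nn.CrossEntropyLoss()
    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            opt.zero_grad()
            loss_fn(model(x), y).backward()  # DDP all-reduces grads
            opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

If the repo does not support DDP, gradient accumulation (several small backward passes per optimizer step) is another standard way to emulate a 1024 batch without holding it in memory at once.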