Open bigrobinson opened 4 years ago
Enable training with multiple GPUs and save the model generically when it is wrapped in DataParallel. Also call the lr_scheduler after the optimizer to bring the code in line with PyTorch 1.4 requirements.
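A minimal sketch of the three patterns described above, assuming a generic PyTorch training loop; the model, optimizer, scheduler, and file name below are placeholders, not this repo's actual code:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 10))  # placeholder model
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Wrap the model in DataParallel when more than one GPU is available.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)

for epoch in range(2):  # placeholder training loop
    # ... forward pass, loss computation, loss.backward() ...
    optimizer.step()
    # Since PyTorch 1.1 (and warned about in 1.4), the scheduler must be
    # stepped after the optimizer, not before.
    scheduler.step()

# Save generically: unwrap .module so the checkpoint loads on a single GPU or CPU.
state_dict = (model.module.state_dict()
              if isinstance(model, nn.DataParallel)
              else model.state_dict())
torch.save(state_dict, "checkpoint.pth")
```

Saving the unwrapped `state_dict` avoids the `module.`-prefixed keys that DataParallel otherwise bakes into the checkpoint, so the same file can be loaded regardless of how many GPUs were used for training.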