Open · dmitsf opened this issue 3 years ago
Can you tell us the PyTorch version?
I use PyTorch 1.9.0.
Are you using torch.distributed.run?
I don't use it at the moment. I followed the MNIST example to adapt my training script.
Could you please share the script and the YAML file? PyTorch 1.9 introduced elastic training, and that may be the source of the hang.
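For context, here is a minimal sketch of how a non-elastic, MNIST-style training script typically initializes the process group inside a PyTorchJob pod. The helper name and the timeout value are illustrative, not taken from the actual script in question:

```python
import datetime

import torch
import torch.distributed as dist


def setup_distributed():
    """Initialize torch.distributed from the env vars the operator injects.

    The PyTorchJob controller sets MASTER_ADDR, MASTER_PORT, RANK and
    WORLD_SIZE in every pod, so env:// rendezvous needs no extra arguments.
    """
    backend = "nccl" if torch.cuda.is_available() else "gloo"
    dist.init_process_group(
        backend=backend,
        init_method="env://",
        # A finite timeout lets a lost peer surface as an error instead of an
        # indefinite hang (the default is 30 minutes). Note that with NCCL the
        # timeout is only enforced when blocking wait / async error handling
        # is enabled.
        timeout=datetime.timedelta(minutes=10),
    )
    return dist.get_rank(), dist.get_world_size()
```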
Hello! I'm setting up training with PyTorchJobs. I have the problem: if one of the pods (doesn't matter, master or worker) reloads, the whole process hangs. The reason for reloading can be different, usually, it's due to Google Cloud Engine node rescheduling. Also, I tried to kill pods myself - the behavior was the same. Can I avoid this behavior and make training tolerant to pods' reloading?
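In case it helps, a minimal sketch of a PyTorchJob manifest with `restartPolicy: OnFailure`, so the operator restarts a crashed pod instead of failing the job. The job name, image and command are placeholders; whether a restarted pod can actually rejoin the job without hanging also depends on how the script sets up the process group (see the sketch above):

```yaml
apiVersion: kubeflow.org/v1
kind: PyTorchJob
metadata:
  name: pytorch-training            # placeholder name
spec:
  pytorchReplicaSpecs:
    Master:
      replicas: 1
      restartPolicy: OnFailure      # restart a crashed master pod
      template:
        spec:
          containers:
            - name: pytorch         # the operator expects this container name
              image: my-registry/my-training-image:latest   # placeholder image
              command: ["python", "train.py"]               # placeholder entrypoint
    Worker:
      replicas: 2
      restartPolicy: OnFailure      # restart crashed worker pods
      template:
        spec:
          containers:
            - name: pytorch
              image: my-registry/my-training-image:latest
              command: ["python", "train.py"]
```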