Closed — opened by BangguWu, closed 4 years ago
I have 8 RTX 2080Ti GPUs, but when I train ade20k-resnet50dilated-ppm_deepsup.yaml it runs out of memory. It trains fine on 4 RTX 2080Ti GPUs. How should I adjust the hyperparameters, such as the learning rate?
Try a smaller batch size; performance might get worse, though.
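Not part of the original reply, but for context: when the total (effective) batch size changes, a common heuristic is the linear learning-rate scaling rule, i.e. scale the learning rate in proportion to the batch size. A minimal sketch, where the function name and the example values are illustrative and not taken from the repo's config:

```python
def scale_lr(base_lr, base_batch, new_batch):
    """Linear scaling rule: scale LR proportionally to total batch size."""
    return base_lr * new_batch / base_batch

# Illustrative example: if the original config used lr=0.02 with a total
# batch of 16 (2 images/GPU x 8 GPUs), and you drop to a total batch of 8
# to fit in memory, the rule suggests halving the learning rate.
new_lr = scale_lr(0.02, 16, 8)
print(new_lr)
```

This is only a starting heuristic; you may still need a short warmup or further tuning, since reported results usually assume the original batch size.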