Closed: yu-changqian closed this issue 6 years ago.
GPU benchmark: 8 x 1080 Ti
CUDA version: 9.0
PyTorch version: 0.4.1
Experiment config:
batch size: 16
num workers: 16
input size: 480x480
When I use sync BN on the ADE20K dataset, training hangs at a certain iteration with no further output, and GPU utilization drops to 0%. Have you had a similar experience?
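For reference, this is a minimal sketch of how I wire sync BN here with this repo; the model below is a placeholder, not my actual segmentation network:

```python
import torch
import torch.nn as nn
from sync_batchnorm import SynchronizedBatchNorm2d, DataParallelWithCallback

# Placeholder model standing in for the actual network.
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    SynchronizedBatchNorm2d(64),  # sync BN from this repo instead of nn.BatchNorm2d
    nn.ReLU(inplace=True),
)

# DataParallelWithCallback instead of plain nn.DataParallel, so that
# replication installs the callbacks linking the per-GPU BN copies.
model = DataParallelWithCallback(model.cuda(), device_ids=list(range(8)))

# Matches the reported config: batch size 16 over 8 GPUs -> 2 samples per GPU,
# input size 480x480.
x = torch.randn(16, 3, 480, 480).cuda()
out = model(x)
```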
Is this comment related? https://github.com/vacancy/Synchronized-BatchNorm-PyTorch/issues/3#issuecomment-412139776
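If it is, the usual culprit is wrapping sync BN in plain `nn.DataParallel`, whose `replicate()` never installs the cross-GPU callbacks, so every `SynchronizedBatchNorm` layer blocks at its barrier and GPU utilization falls to 0%. A sketch of the retrofit using this repo's `patch_replication_callback`, assuming that's the cause (the network here is a hypothetical stand-in):

```python
import torch.nn as nn
from sync_batchnorm import SynchronizedBatchNorm2d, patch_replication_callback

# Hypothetical network already using the repo's sync BN layers.
net = nn.Sequential(nn.Conv2d(3, 8, 3), SynchronizedBatchNorm2d(8))

# Plain nn.DataParallel alone deadlocks with sync BN.
model = nn.DataParallel(net.cuda(), device_ids=list(range(8)))
patch_replication_callback(model)  # retrofit the missing replication callback
```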
Closing the issue for now. Feel free to reopen it if you still have questions.