openseg-group / OCNet.pytorch

Please use the openseg.pytorch project for the updated code, which achieves SOTA on 6 benchmarks!

Have you compared the inplace_sync_abn with other sync bn? #20

Open yu-changqian opened 6 years ago

yu-changqian commented 6 years ago

Have you used this sync-bn? Have you compared the speed and the final performance between these two sync-BN implementations?
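For reference, below is a rough single-GPU timing sketch (not code from this repo) for comparing two normalization layers' forward+backward cost; a real sync-BN comparison would also need a multi-process distributed launch, and the `InPlaceABNSync` import is assumed to come from the inplace_abn package.

```python
import time
import torch
import torch.nn as nn

def time_norm_layer(norm_layer, num_features=256, iters=50, device="cuda"):
    """Rough forward+backward timing for a single normalization layer."""
    layer = norm_layer(num_features).to(device)
    x = torch.randn(8, num_features, 64, 64, device=device, requires_grad=True)
    # Warm-up so kernel launches / cuDNN autotuning do not distort the timing.
    for _ in range(5):
        layer(x.clone()).sum().backward()
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        # Clone so in-place normalization layers do not touch the leaf tensor.
        layer(x.clone()).sum().backward()
    torch.cuda.synchronize()
    return (time.time() - start) / iters

if __name__ == "__main__":
    print("nn.BatchNorm2d:", time_norm_layer(nn.BatchNorm2d))
    # Swap in whichever sync-BN variant you want to compare, e.g.:
    # from inplace_abn import InPlaceABNSync  # assumed package layout
    # print("InPlaceABNSync:", time_norm_layer(InPlaceABNSync))
```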

PkuRainBow commented 6 years ago

You are welcome to report numbers based on other BN implementations.

lyxlynn commented 6 years ago

@PkuRainBow I tried sync-bn, but training stops at a certain iteration without any output. GPU utilization drops to 0 while memory usage stays full. I tried training with 8 TITANs, 6 TITANs, and 4 TITANs, and all failed; only 2 TITANs seem to work normally so far. With inplace-abn everything seems normal...
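A hang where GPU utilization drops to zero while memory stays full usually points to a collective-communication deadlock. One sanity check is a sketch under assumptions of my own, not code from this repo: swap the custom sync-BN for PyTorch's built-in `torch.nn.SyncBatchNorm` under `DistributedDataParallel` and see whether the hang persists. `build_ocnet()` below is just a placeholder for however the model is constructed, and setting `NCCL_DEBUG=INFO` in the environment can help show which collective call is stuck.

```python
import torch
import torch.distributed as dist
import torch.nn as nn

def wrap_model(model, local_rank):
    # Convert every BatchNorm*d layer to SyncBatchNorm so statistics are
    # synchronized across all processes in the default process group.
    model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
    model = model.cuda(local_rank)
    # One process per GPU; DistributedDataParallel handles the gradient all-reduce.
    return nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])

# Typical usage inside a script launched with torchrun (one process per GPU):
# dist.init_process_group(backend="nccl")
# local_rank = int(os.environ["LOCAL_RANK"])
# model = wrap_model(build_ocnet(), local_rank)  # build_ocnet() is a placeholder
```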

PkuRainBow commented 6 years ago

@Liuyixuan95 So have you reproduced the results?

In fact, I just use inplace-abn, and it seems that inplace-abn does not support 8 GPU cards.

We are preparing new work that will enable you to train OCNet with 4 Pascal Titan GPUs.

Please stay tuned for our new work. We will try to release the paper by the end of December.

foolwood commented 5 years ago

@PkuRainBow I encountered the same problem. It's hard to understand why it gets stuck with 8 GPUs. Unfortunately, I only have a machine with 8 GTX 1080 Ti cards, which can fit only 1 image per GPU...

Looking forward to the optimized code.
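On the memory side, one possible workaround for fitting more than one image per 11 GB card is gradient checkpointing, which recomputes activations during the backward pass instead of storing them. This is only a sketch under my own assumptions, not the repo's code: it uses `torch.utils.checkpoint.checkpoint_sequential`, and `backbone_stages` is a hypothetical list of the network's sequential stages, not an identifier from this repo.

```python
import torch
from torch.utils.checkpoint import checkpoint_sequential

def forward_with_checkpointing(backbone_stages, x, segments=4):
    """Run a list of sequential stages with activation checkpointing.

    backbone_stages: hypothetical list of nn.Module stages (e.g. ResNet blocks).
    The input should require grad, otherwise nothing is recomputed in backward.
    """
    model = torch.nn.Sequential(*backbone_stages)
    # Split the model into `segments` chunks; only the chunk boundaries keep
    # their activations, everything else is recomputed during the backward pass.
    return checkpoint_sequential(model, segments, x)
```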