TuSimple / mx-maskrcnn

An MXNet implementation of Mask R-CNN
Apache License 2.0

The speed of train_maskrcnn on COCO #118

Open zl1994 opened 6 years ago

zl1994 commented 6 years ago

I use 4 GTX 1080s (a single image per GPU) to alternately train Mask R-CNN on COCO. When training the RPN, the speed reaches 8 samples/sec. But when training Mask R-CNN, the speed varies and is slow: sometimes 2 samples/sec, sometimes 0.1 samples/sec, and the volatile GPU-util is 0 most of the time. In summary, I have three questions:

  1. Why is training the RPN so much faster than training Mask R-CNN?
  2. Why does the Mask R-CNN training speed vary so much?
  3. Why is the volatile GPU-util 0, and is that what makes Mask R-CNN training slow? (A quick way to check is sketched below.)
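One way to see whether the data pipeline is what keeps the GPUs idle is to time the iterator by itself. A minimal sketch, assuming `train_data` stands in for whatever DataIter `train_maskrcnn.py` builds for the Mask R-CNN stage; the variable name is only illustrative:

```python
import time

def time_batches(train_data, n_batches=20):
    """Measure how long the CPU-side pipeline needs to produce each batch."""
    train_data.reset()
    start = time.time()
    for i, _batch in enumerate(train_data):
        if i >= n_batches:
            break
        print('batch %d prepared in %.3f s' % (i, time.time() - start))
        start = time.time()

# If each batch takes several seconds to prepare, the GPUs have nothing to
# compute in the meantime, which shows up as 0% volatile GPU-util and a
# throughput far below what the forward/backward pass itself allows.
```
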
xuw080 commented 6 years ago

I have similar questions and don't know how to solve this. My speed is 0.7 samples per second, which is also very slow.

gaosanyuan commented 6 years ago

I found that most of the time is spent getting the batch data. I tried prefetching the batch data in multiple processes, but it didn't help (still slow). Any other solutions? @xuw080 @zl1994
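For what it's worth, MXNet also ships a thread-based prefetch wrapper that overlaps batch preparation with GPU compute. A minimal sketch, assuming `train_data` is the training DataIter built in `train_maskrcnn.py` (the name is an assumption, not this repo's code):

```python
import mxnet as mx

def wrap_with_prefetch(train_data):
    # A background thread keeps preparing the next batch while the GPUs
    # work on the current one, hiding at most one batch of latency.
    return mx.io.PrefetchingIter(train_data)
```

Note that this only hides one batch of preparation time: if building a single batch (image loading, anchor and mask target generation) takes longer than the forward/backward pass, the GPUs will still stall, and the speed-up has to come from the batch preparation itself.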

rxqy commented 6 years ago

Hi, I also encountered a similar problem here. Any suggestions?