facebookresearch / maskrcnn-benchmark

Fast, modular reference implementation of Instance Segmentation and Object Detection algorithms in PyTorch.
MIT License

Accelerate Inference #776

Open AlenUbuntu opened 5 years ago

AlenUbuntu commented 5 years ago

Hello all:

I am currently trying to run tools/test_net.py to run inference on the COCO detection test dataset, but I find the inference is quite slow: it takes on average 0.6 s to predict on a single image. Is there any way to accelerate the inference?

The hardware used is a single RTX 2080 Ti GPU.

Thank you

ClimbsRocks commented 5 years ago

I too have found this curious. With your same hardware, I'm able to get "live" predictions from a webcam in 0.2 seconds each. Yet I also find that the test_net.py script takes significantly longer per prediction (something in the range of 1 second each for my X-152 based model). Changing the batch size didn't seem to do too much for me.

I did some quick, rough attempts at performance profiling (using the awesome https://github.com/rkern/line_profiler) to see if maybe it was the scoring code, rather than the prediction code, that was slowing things down. From what I remember, it looked like it was model predictions that were taking up most of the time, though I didn't spend too much time on this.
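For anyone who wants to repeat that check without pulling in line_profiler, here is a minimal timing sketch using only the standard library. The `predict` and `evaluate` functions are hypothetical stand-ins for the model forward pass and the COCO scoring code, not functions from this repo:

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def timer(name):
    # Accumulate wall-clock time per phase so prediction vs. evaluation
    # can be compared after a full run.
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = timings.get(name, 0.0) + time.perf_counter() - start

# Stand-ins for the real model forward pass and the COCO evaluation step.
def predict(image):
    time.sleep(0.002)
    return []

def evaluate(results):
    time.sleep(0.001)

results = []
for image in range(5):
    with timer("prediction"):
        results.append(predict(image))
with timer("evaluation"):
    evaluate(results)

print(timings["prediction"] > timings["evaluation"])
```

With a real GPU model you would also want to call `torch.cuda.synchronize()` before reading the clock, since CUDA kernels run asynchronously and `perf_counter` alone can under-report the forward-pass time.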

I ended up just reducing my test set size by 90% while developing, and that gave me pretty good estimates at model performance. But obviously I'd love to hear any thoughts about how to speed this up!
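In case it helps anyone, subsampling a COCO-style annotation file only takes a few lines. `subsample_coco` below is a hypothetical helper (not part of this repo), sketched against the standard COCO json layout with `images` and `annotations` keys:

```python
import random

def subsample_coco(ann, keep_frac=0.1, seed=0):
    """Keep a random keep_frac of images (and their annotations)
    from a COCO-style dict with 'images' and 'annotations' keys."""
    rng = random.Random(seed)
    ids = [im["id"] for im in ann["images"]]
    keep = set(rng.sample(ids, max(1, int(len(ids) * keep_frac))))
    return {
        **ann,
        "images": [im for im in ann["images"] if im["id"] in keep],
        "annotations": [a for a in ann["annotations"] if a["image_id"] in keep],
    }

# Tiny smoke check on a toy annotation dict.
toy = {
    "images": [{"id": i} for i in range(10)],
    "annotations": [{"id": i, "image_id": i % 10} for i in range(30)],
}
small = subsample_coco(toy, keep_frac=0.1)
print(len(small["images"]))  # 1 image kept out of 10
```

Write the result back out with `json.dump` and point the dataset config at the smaller file.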

zimenglan-sysu-512 commented 5 years ago

hi @ClimbsRocks, it may be caused by the image pre-processing (e.g. decrease the input size, use max_size to limit the input, or rewrite the input transform) or by the NMS step. You can tune these to get faster speed.
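To make the input-size point concrete: the usual test-time transform scales the shorter side to `INPUT.MIN_SIZE_TEST` and caps the longer side at `INPUT.MAX_SIZE_TEST` (800/1333 in the default config, if I read it right). A rough sketch of that rule (the exact rounding in the repo may differ slightly):

```python
def target_size(w, h, min_size, max_size):
    # Scale the shorter side up/down to min_size, but if that would push
    # the longer side past max_size, rescale so the longer side fits.
    scale = min_size / min(w, h)
    if max(w, h) * scale > max_size:
        scale = max_size / max(w, h)
    return int(round(w * scale)), int(round(h * scale))

print(target_size(1280, 720, 800, 1333))  # default-ish COCO test sizes
print(target_size(1280, 720, 400, 667))   # halved limits: ~4x fewer pixels
```

Halving both limits cuts the pixel count (and with it most of the conv FLOPs) by roughly 4x, usually at some AP cost on small objects.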

dc-cheny commented 3 years ago

hey @zimenglan-sysu-512, did you try rewriting the code (the input transform) you mentioned above? I think that is exactly the root cause.