WXinlong / SOLO

SOLO and SOLOv2 for instance segmentation, ECCV 2020 & NeurIPS 2020.

Batch inference #110

Closed wuyuanyi135 closed 4 years ago

wuyuanyi135 commented 4 years ago

Hello, thanks for the great project. I am trying to adapt SOLOv2 into my project, and I would like to use batch inference to make more efficient use of my GPU. Currently I have run into the following limitations. In `mmdet/apis/inference.py`, although the docstring says that `img` can be a list of images, passing a list makes `data = test_pipeline(data)` fail, because the pipeline does not support batched input.
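To work around the single-image limitation, a wrapper can map the pipeline over each image separately. This is a minimal sketch of that idea; `run_pipeline_per_image` and the toy pipeline are hypothetical stand-ins, not part of mmdet's API:

```python
# Hypothetical helper (not from mmdet): apply the single-image test
# pipeline to each image in a list, since test_pipeline(data) fails
# when given batched input.
def run_pipeline_per_image(test_pipeline, img_list):
    """Run the pipeline once per image and collect the results."""
    return [test_pipeline(dict(img=img)) for img in img_list]

# Toy pipeline standing in for mmdet's Compose: it just tags each dict.
def toy_pipeline(data):
    data["processed"] = True
    return data

results = run_pipeline_per_image(toy_pipeline, ["img0.jpg", "img1.jpg"])
assert len(results) == 2 and all(r["processed"] for r in results)
```

The real `test_pipeline` would return per-image tensors, which then still need to be collated into one batch before being fed to the model.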

I tried to bypass this limitation by writing a wrapper that maps the pipeline over each image, and it succeeded. Next, I tried to batch the tensors with this dirty hack: `data['img'] = [torch.cat([d_each['img'][0] for d_each in d], dim=0)]`. I was able to get a batched tensor, but the assertion in `mmdet/models/detectors/base.py#124` prevents me from proceeding: `assert imgs_per_gpu == 1`.
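For illustration, the concatenation hack above can be reproduced in isolation. The dict layout here (each per-image result holding a `(1, C, H, W)` tensor under `'img'`) is an assumption matching the snippet in the issue, not mmdet's exact data format:

```python
import torch

# Stand-in for the per-image pipeline outputs: each entry holds a
# single-image tensor of shape (1, C, H, W) under the 'img' key,
# mirroring the structure implied by the hack in the issue.
d = [{"img": [torch.randn(1, 3, 32, 32)]} for _ in range(4)]

# The "dirty hack": concatenate the per-image tensors along the
# batch dimension to form one (N, C, H, W) batched tensor.
data = {"img": [torch.cat([d_each["img"][0] for d_each in d], dim=0)]}

assert data["img"][0].shape == (4, 3, 32, 32)
```

This produces a valid batched tensor, but as noted above, the detector's `assert imgs_per_gpu == 1` still rejects it downstream.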

I wonder whether it is possible to perform batch inference with the current codebase, so as to get inference speed on par with the reported numbers.

Regards, YW

WXinlong commented 4 years ago

@wuyuanyi135 This is because mmdetection did not support batch inference at the time. I noticed that they have recently added support for it, as shown in their changelog, so you can refer to their updates. I also provided a SOLO implementation on mmdet v2, which may help.

wuyuanyi135 commented 4 years ago

Thanks for your response 👍