Closed: wuyuanyi135 closed this issue 4 years ago
@wuyuanyi135 It is because mmdetection didn't support batch inference at the time. But I noticed that they recently added support for it, as noted in their changelog, so you can refer to their updates. I also provided the SOLO implementation in mmdet v2, which may help.
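For example, in the mmdet v2 releases where the changelog notes batch-inference support, the stock `inference_detector` should accept a list of images directly. A minimal sketch (the config and checkpoint paths below are placeholders; substitute your own):

```python
from mmdet.apis import init_detector, inference_detector

# Placeholder paths; point these at your own SOLOv2 config and weights.
config_file = 'configs/solov2/solov2_r50_fpn_3x_coco.py'
checkpoint_file = 'checkpoints/solov2_r50_fpn_3x.pth'

model = init_detector(config_file, checkpoint_file, device='cuda:0')

# In the versions that support it, passing a list runs the whole batch at
# once and returns one result per image; older versions only take a single
# image, so check the changelog for the exact version that added this.
results = inference_detector(model, ['demo1.jpg', 'demo2.jpg'])
```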
Thanks for your response 👍
Hello, thanks for the great project. I am trying to adapt SOLOv2 into my project, and I would like to use batch inference to use my GPU more efficiently. Currently, I noticed the following limitations.

In `mmdet/apis/inference.py`, although the docstring specifies that `img` can be a list of images, passing a list makes `data = test_pipeline(data)` fail because the pipeline does not support batched input. I tried to bypass this limitation by writing a wrapper that maps the pipeline over each image, and it worked. Next, I tried to batch the tensors with this dirty hack:

```python
data['img'] = [torch.cat([d_each['img'][0] for d_each in d], dim=0)]
```
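For concreteness, the wrapper I mean looks roughly like this. It is an untested, simplified sketch: `batched_inference` is just an illustrative name, and the inline `LoadImage` stand-in mirrors the single-image path in `mmdet/apis/inference.py`:

```python
import mmcv
import torch
from mmcv.parallel import collate, scatter
from mmdet.datasets.pipelines import Compose


class LoadImage(object):
    """Stand-in for the loading step, mirroring mmdet/apis/inference.py."""

    def __call__(self, results):
        if isinstance(results['img'], str):
            results['filename'] = results['img']
        else:
            results['filename'] = None
        results['img'] = mmcv.imread(results['img'])
        results['img_shape'] = results['img'].shape
        results['ori_shape'] = results['img'].shape
        return results


def batched_inference(model, imgs):
    """Map the single-image test pipeline over `imgs` (paths or ndarrays),
    then collate the per-image dicts into one batch. Untested sketch."""
    cfg = model.cfg
    device = next(model.parameters()).device
    test_pipeline = Compose([LoadImage()] + cfg.data.test.pipeline[1:])
    # Run the pipeline once per image instead of passing the list through.
    batch = [test_pipeline(dict(img=img)) for img in imgs]
    # collate() stacks/pads the per-image tensors, avoiding the manual
    # torch.cat hack above.
    data = scatter(collate(batch, samples_per_gpu=len(imgs)), [device])[0]
    with torch.no_grad():
        # This is where the `assert imgs_per_gpu == 1` in base.py still fires.
        return model(return_loss=False, rescale=True, **data)
```

Using `collate()` this way yields the same batched tensor as the `torch.cat` hack, just through the standard mmcv collation path.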
I was able to get a batched tensor but inmmdet\models\detectors\base.py
#124 the assertion prevents me to proceedassert imgs_per_gpu == 1
I wonder whether it is possible to perform batched inference with the current codebase to get the inference speed on par to the reported speed?
Regards, YW