Hi, thanks for your work! I was just wondering why samples_per_gpu has to be set to 1 at inference time. Is there a bug, or is there some other concern behind it?
It's a legacy problem from an earlier version of MMDetection. This repo is based on MMDetection v2.2, where all models' post-processing is written under the assumption that the batch size is 1.
In later versions of MMDetection, this problem has been fixed.
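For what it's worth, here is a minimal sketch of the failure mode (hypothetical code, not taken from this repo or from MMDetection): post-processing that hard-codes index 0 into the batch dimension is only correct when the test dataloader runs with samples_per_gpu=1.

```python
import torch

def single_image_post_process(cls_scores, bbox_preds, score_thr=0.05):
    """Hypothetical post-processing written for batch size 1.

    cls_scores: (batch, num_boxes, num_classes)
    bbox_preds: (batch, num_boxes, 4)
    """
    scores = cls_scores[0]   # hard-coded index 0: only the first image is processed
    bboxes = bbox_preds[0]
    max_scores, labels = scores.max(dim=-1)
    keep = max_scores > score_thr
    return bboxes[keep], max_scores[keep], labels[keep]

# With samples_per_gpu=1 the [0] index is harmless; with a batch of 4,
# detections for images 1-3 are silently dropped.
cls_scores = torch.rand(4, 100, 80)
bbox_preds = torch.rand(4, 100, 4)
boxes, scores, labels = single_image_post_process(cls_scores, bbox_preds)
```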
Thanks! But do you plan to fix this problem directly, or just update to a newer version of MMDetection?
I plan to update to a newer version of MMDetection, but it's a big project. 😖 The update may take a long time.
Thanks! It is indeed a big project, and fixing the problem is not easy. Anyway, looking forward to the updated version!