abhiagwl4262 opened 4 years ago
At the moment the inference code is only compatible with a batch size of 1. The limitation comes from the final NMS processing, which would be wrong if the batch size were > 1. It is possible to refactor this code to support batch sizes > 1, and it would be a good exercise to implement. Feel free to try it, I would say.
I created an extension of this repo with some added features. One of them is batched evaluation using PyTorch's batched_nms. You may check it out: https://github.com/bishwarup307/pytorch-retinanet
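For context, `torchvision.ops.batched_nms` avoids suppressing boxes across different images (or classes) by shifting each group's boxes by a per-index offset so groups can never overlap, then running plain NMS once over everything. A minimal NumPy sketch of that offset trick (function names here are illustrative, not the fork's actual code):

```python
import numpy as np

def nms(boxes, scores, iou_thr):
    """Plain NMS. boxes: (N, 4) in xyxy format. Returns kept indices, highest score first."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        # intersection of the top box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thr]  # drop boxes overlapping the kept one
    return np.array(keep)

def batched_nms(boxes, scores, idxs, iou_thr):
    """Batch-aware NMS: idxs holds the image (or class) index of each box.
    Offsetting boxes per index makes cross-image suppression impossible."""
    if boxes.size == 0:
        return np.empty(0, dtype=int)
    offsets = idxs * (boxes.max() + 1.0)
    return nms(boxes + offsets[:, None], scores, iou_thr)

# Two overlapping boxes from image 0 and one box from image 1:
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [0, 0, 10, 10]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
idxs = np.array([0, 0, 1], dtype=float)
print(batched_nms(boxes, scores, idxs, 0.5))  # image-1 box survives despite identical coords
```

`torchvision.ops.batched_nms(boxes, scores, idxs, iou_threshold)` implements the same idea on the GPU, so the whole batch can be post-processed in one call.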
Hi @bishwarup307, your fork looks really promising. As soon as I have some time I'll have a look at the code. Thanks for sharing. Just a quick question: is the ONNX export readable by OpenCV DNN?
Not really sure about that; it works with onnxruntime though. Will check.
@bishwarup307, thanks for your reply. BTW, if you're interested in the RetinaNet version that supports TorchScript, have a look at https://github.com/wvalcke/pytorch-retinanet. It's based on the original retinanet master, so it's easy to see the changes.
Hey @mimoralea, as the evaluation runs sequentially for each image, the process is too slow. Is there any other way to do the evaluation?