Performing inference on a batch of images via the following command in OnnxCaffe2Import.ipynb works:
outputs = caffe2.python.onnx.backend.run_model(model, [img])
But the detections are all stacked together, so I can't tell which detections belong to which image. For example, if I pass in 3 images, I receive 20 detections with no indication of how many belong to the 1st image, how many belong to the 2nd image, and so on.
How can this be addressed?
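
In the meantime, the only workaround I can see is to run the model on one image at a time, roughly as in the sketch below. The names images and detections_per_image are illustrative (not from the notebook), and I'm assuming each img is already preprocessed the same way as for the batched call. This keeps every detection attributable to its image, but it gives up the throughput benefit of batching.

import caffe2.python.onnx.backend

# Workaround sketch: run a single-image batch per image so every detection
# in `outputs` is known to belong to that image.
# `images` and `detections_per_image` are illustrative names.
detections_per_image = []
for img in images:
    outputs = caffe2.python.onnx.backend.run_model(model, [img])
    detections_per_image.append(outputs)

# detections_per_image[i] now holds the detections for images[i].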