Open matthost opened 1 month ago
Perhaps I am doing something wrong? I tried stacking the images with NumPy, which did not help.
Perhaps related to https://github.com/open-mmlab/mmdeploy/issues/2808
I found the `Detector` class has a `batch` method, so I gave that a try: `detector.batch([img, img, img])`
But it seems to be passing a single image at a time to the model, which fails due to wrong dimensions:
[2024-08-14 16:07:31.873] [mmdeploy] [error] [trt_net.cpp:28] TRTNet: 3: [executionContext.cpp::validateInputBindings::2083] Error Code 3: API Usage Error (Parameter check failed at: runtime/api/executionContext.cpp::validateInputBindings::2083, condition: profileMinDims.d[i] <= dimensions.d[i]. Supplied binding dimension [1,3,800,1344] for bindings[0] exceed min ~ max range at index 0, maximum dimension in profile is 3, minimum dimension in profile is 3, but supplied dimension is 1.
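The TensorRT error above says the engine's optimization profile fixes the batch dimension at exactly 3, but the runtime received a tensor with batch dimension 1. A minimal NumPy sketch of the shape mismatch, using the dimensions reported in the log (the array contents are placeholders):

```python
import numpy as np

# Preprocessed image in NCHW layout, with the H/W from the error log.
h, w = 800, 1344
single = np.zeros((1, 3, h, w), dtype=np.float32)  # batch dim 1 -> rejected

# The engine was built with a static batch of 3, so the first
# dimension must be exactly 3 to satisfy the profile's min/max range:
batch = np.concatenate([single, single, single], axis=0)

print(single.shape)  # (1, 3, 800, 1344) -- what was supplied per the log
print(batch.shape)   # (3, 3, 800, 1344) -- what the profile requires
```

This matches the log's complaint: "maximum dimension in profile is 3, minimum dimension in profile is 3, but supplied dimension is 1".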
The `inference_model` API from `mmdeploy.apis` seems to work if you pass a list to `img`.
Checklist
Describe the bug
An MMDetection model was compiled with a static batch size of 3. When using the Python `Detector` API to perform inference on a batch of 3 images, it fails with
`RuntimeError: continuous uint8 HWC array expected`

Reproduction
Convert an MMDetection model to TensorRT using MMDeploy with a static batch size > 1, then try to run batch inference:
Error:
RuntimeError: continuous uint8 HWC array expected
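The "continuous uint8 HWC array expected" error suggests each image handed to the runtime must be a C-contiguous `uint8` array in height-width-channel layout. A hedged sketch of coercing an image into that layout before calling the detector (the helper name is hypothetical, not part of mmdeploy):

```python
import numpy as np

def as_detector_input(img: np.ndarray) -> np.ndarray:
    """Hypothetical helper: coerce an image into the contiguous uint8
    HWC layout that the runtime's error message asks for."""
    if img.ndim == 3 and img.shape[0] == 3 and img.shape[2] != 3:
        img = img.transpose(1, 2, 0)           # CHW -> HWC
    if img.dtype != np.uint8:
        img = np.clip(img, 0, 255).astype(np.uint8)
    return np.ascontiguousarray(img)           # ensure C-contiguous memory

# Example: a CHW float image (e.g. one slice of a stacked batch, whose
# memory is no longer contiguous after transposing) becomes a
# contiguous uint8 HWC array suitable for the runtime.
chw = (np.random.rand(3, 800, 1344) * 255).astype(np.float32)
fixed = as_detector_input(chw)
print(fixed.shape, fixed.dtype, fixed.flags["C_CONTIGUOUS"])
```

Note this only addresses the array-layout error on each element; it would not change how `Detector.batch` splits the list into per-image inference calls, which is the TensorRT profile mismatch described above.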
Environment
Error traceback