Closed: huliang2016 closed this issue 1 year ago
@lvhan028
Hi @huliang2016, sorry for the late reply. I can't reproduce your problem.
While following your steps I had to modify a few things. First, since you change test_pipeline.max_width to 800, I had to change the last dim of max_shape to 800 in the deploy_cfg. Second, when converting the model, since CRNN takes a single-channel image as input according to this config, I had to use text-recognition_tensorrt-fp16_dynamic-1x32x32-1x32x640.py. To support batch inference, I also changed the batch dim of the deploy_cfg to 1/8/16.
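Concretely, those changes amount to input shapes roughly like the following in the deploy config (a sketch; the non-batch values are taken from the stock 1x32x32-1x32x640 config and may differ slightly):

model_inputs = [
    dict(
        input_shapes=dict(
            input=dict(
                min_shape=[1, 1, 32, 32],      # batch 1, single channel, height 32
                opt_shape=[8, 1, 32, 64],      # batch 8
                max_shape=[16, 1, 32, 800])))  # batch 16, max width raised to 800
]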
After converting the model, I used your test command with batch sizes from 1 to 16, and the results were all "nideployismsorgreator".
Have you modified the model config, or have I missed something?
Thanks for your reply.
Have you modified TRT_MODEL_PATH/pipeline.json to support batch inference by setting "is_batched": true?
As this issue suggests, yes, I added "is_batched": true.
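For reference, the edit can be applied like this (a sketch; TRT_MODEL_PATH is the placeholder from above, and where "is_batched" sits inside pipeline.json depends on the mmdeploy version, so this snippet just flips every occurrence):

import json

PIPELINE = 'TRT_MODEL_PATH/pipeline.json'  # placeholder: the exported model directory

def set_is_batched(node):
    # Recursively set every "is_batched" flag to True, wherever it appears.
    if isinstance(node, dict):
        if 'is_batched' in node:
            node['is_batched'] = True
        for value in node.values():
            set_is_batched(value)
    elif isinstance(node, list):
        for item in node:
            set_is_batched(item)

with open(PIPELINE) as f:
    cfg = json.load(f)
set_is_batched(cfg)
with open(PIPELINE, 'w') as f:
    json.dump(cfg, f, indent=2)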
Also, is TextRecognizer imported from mmdeploy_python, and is inference done via recognizer.batch([image] * 12)?
Yes, as I described above, I followed your steps, apart from the modifications needed to make the conversion succeed.
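Putting those pieces together, the inference part looks roughly like this (a sketch; the constructor arguments follow the mmdeploy 0.x Python demos and the paths are placeholders):

import cv2
from mmdeploy_python import TextRecognizer

# Placeholders: point the first argument at the directory produced by the conversion.
recognizer = TextRecognizer('TRT_MODEL_PATH', 'cuda', 0)
image = cv2.imread('demo_text.jpg')

# Batch inference as discussed above; item[0] is the recognized text.
for item in recognizer.batch([image] * 12):
    print(item[0])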
That's strange... Do you have any suggestions? And what does your environment look like?
Your log lists library versions such as tensorrt: 8.4.3.1 and MMDeploy: 0.8.0+14e31fb. I used the same versions as yours.
Following your steps as-is, the model couldn't be converted, so I modified the config. Have you also modified the config as I described above?
Yes
My ./configs/mmocr/text-recognition/text-recognition_tensorrt-fp16_dynamic-1x32x32-1x32x640.py file looks like this:
_base_ = [
    './text-recognition_dynamic.py', '../../_base_/backends/tensorrt-fp16.py'
]
backend_config = dict(
    common_config=dict(max_workspace_size=1 << 32),
    model_inputs=[
        dict(
            input_shapes=dict(
                input=dict(
                    min_shape=[1, 1, 32, 800],
                    opt_shape=[32, 1, 32, 800],
                    max_shape=[256, 1, 32, 800])))
    ])
@irexyc Could you please run this code in Jupyter or in a single Python file?
The first time, we run:

for item in recognizer.batch([image] * 16):
    print(item[0])

After that, in the same kernel, we run:

for item in recognizer.batch([image] * 12):
    print(item[0])
I tried it, and the result is still the same.
Checklist
Describe the bug
After converting the CRNN model from PyTorch to TensorRT, I would like to use batch inference to speed things up, but the inference result seems strange.
The original image:
If we set batch_size to 2 or 16, the result seems reasonable.
But when we set batch_size to 3 or 12, the result seems strange.
Reproduction
Step 0. Modify /path_to_mmocr/configs/_base_/recog_pipelines/crnn_pipeline.py, changing test_pipeline.max_width to 800 (sketched below).
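The edit in step 0 targets the ResizeOCR transform of the CRNN test pipeline; it looks roughly like this (a sketch; the surrounding transforms and stock values depend on your mmocr version):

# /path_to_mmocr/configs/_base_/recog_pipelines/crnn_pipeline.py (sketch)
test_pipeline = [
    dict(type='LoadImageFromFile', color_type='grayscale'),
    dict(
        type='ResizeOCR',
        height=32,
        min_width=32,
        max_width=800,  # step 0: set to 800 instead of the stock value
        keep_aspect_ratio=True),
    # ... Normalize / Collect transforms left unchanged ...
]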
Step 1. The model conversion command looks like:
Step 2. Modify the exported TRT_MODEL_PATH/pipeline.json to support batch inference by setting "is_batched": true, as this issue suggests.
Step 3. Test command:

Environment
Error traceback
No response