lyjdwh opened this issue 3 years ago
I am not sure.
I found a similar error in other repos with a solution; see if it helps you.
Also, TensorRT 7.0 is not recommended: it may leak memory on some devices.
This solution works! Thanks so much!
But when I use the converted TensorRT model, I get `unexpected key in source state_dict: engine, input_names, output_names`. I use mmdet 2.13.0:
```python
from mmdet.apis import inference_detector, init_detector
import os

# Choose a config and initialize the detector
config = '/root/mmdetection/zyys_great_experiments/config_ikcest.py'
# Checkpoint file to load
checkpoint = '/root/mmdetection/work_dirs/config_ikcest/latest_tensorrt.pth'
# Initialize the detector
model = init_detector(config, checkpoint, device='cuda:1')

# Use the detector to do inference
imgs = [os.path.join("/root/databases/ikcest/test_dataset", file)
        for file in os.listdir("/root/databases/ikcest/test_dataset")]
for i in range(20):
    test_imgs = imgs[i * 20:(i + 1) * 20]
    result = inference_detector(model, test_imgs)
```
If you are using mmdetection-to-tensorrt to convert the model, the inference interface is here, not the official one; here is the demo. I haven't tested the repo on mmdetection 2.13, so if you find any problem, please feel free to report it here.
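A rough sketch of that inference path, for comparison with the script above. The `init_detector`/`inference_detector` names and the four-argument `inference_detector(model, img, config, device)` call are taken from this thread; the assumption that `init_detector` takes only the converted-model path follows the repo's demo and may differ by version. Paths are placeholders, and the import is kept inside the function because it needs a machine with TensorRT and mmdet2trt installed:

```python
def batches(items, size):
    """Split a list of image paths into consecutive batches of at most `size`."""
    return [items[i:i + size] for i in range(0, len(items), size)]


def run_trt_inference(trt_model_path, config_path, image_paths, device="cuda:0"):
    # Requires TensorRT and the mmdet2trt package; names follow the repo's demo.
    from mmdet2trt.apis import inference_detector, init_detector

    # Load the converted engine through mmdet2trt, NOT mmdet.apis.init_detector --
    # loading it with mmdet is what produces "unexpected key in source state_dict".
    model = init_detector(trt_model_path)
    results = []
    for img in image_paths:
        results.append(inference_detector(model, img, config_path, device))
    return results
```

The key point is which loader is used: the converted `.pth` stores the serialized engine plus I/O names, so mmdet's own checkpoint loader does not understand it.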
I changed to mmdet 2.10, but I still get an error. Do you know why?
```
(open-mmlab) root@bms--Ubuntu:~/mmdetection/zyys_great_experiments# python test_tensorrt.py
0
[TensorRT] ERROR: Parameter check failed at: engine.cpp::setBindingDimensions::1046, condition: profileMinDims.d[i] <= dimensions.d[i]
[TensorRT] ERROR: Parameter check failed at: engine.cpp::resolveSlots::1228, condition: allInputDimensionsSpecified(routine)
Traceback (most recent call last):
  File "test_tensorrt.py", line 27, in <module>
    result = inference_detector(model, test_img, config, "cuda:1")
  File "/root/mmdetection-to-tensorrt/mmdet2trt/apis/inference.py", line 48, in inference_detector
    result = model(tensor)
  File "/root/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/root/torch2trt_dynamic/torch2trt_dynamic/torch2trt_dynamic.py", line 478, in forward
    shape = tuple(self.context.get_binding_shape(idx))
ValueError: __len__() should return >= 0
```
When I run this, I get the error above. I use CUDA 10.2, cuDNN 7.6.5, TensorRT 7.0.0.11.
Do you have any idea? Thanks so much!
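About the `profileMinDims.d[i] <= dimensions.d[i]` error: TensorRT raises it when the shape passed to `setBindingDimensions` falls outside the optimization-profile range the engine was built with, so the usual fix is to rebuild the engine with min/opt/max shapes that cover the test inputs, or resize inputs into the existing range. A TensorRT-free sketch of that constraint (the helper and the profile values below are illustrative, not from the repo):

```python
def shape_in_profile(shape, min_dims, max_dims):
    """Return True if every dimension of `shape` lies within the
    [min_dims, max_dims] range of a TensorRT-style optimization profile.
    A -1 in the profile marks a dimension as unconstrained here."""
    return all(
        (lo == -1 or lo <= d) and (hi == -1 or d <= hi)
        for d, lo, hi in zip(shape, min_dims, max_dims)
    )


# Example profile: engine built for batch 1, images between 320 and 1344 px
min_dims = (1, 3, 320, 320)
max_dims = (1, 3, 1344, 1344)

shape_in_profile((1, 3, 800, 1216), min_dims, max_dims)   # within range
shape_in_profile((20, 3, 800, 1216), min_dims, max_dims)  # batch 20 exceeds max batch 1
```

If any dimension fails this check, `setBindingDimensions` rejects the shape, no binding shape is resolved, and the downstream `get_binding_shape` call returns the invalid shape that triggers `ValueError: __len__() should return >= 0`.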