Dominic23331 / rtmpose_tensorrt


[03/05/2024-13:55:15] [TRT] [E] 1: [pluginV2Runner.cpp::nvinfer1::rt::load::293] Error Code 1: Serialization (Serialization assertion creator failed.Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry) [03/05/2024-13:55:15] [TRT] [E] 4: [runtime.cpp::nvinfer1::Runtime::deserializeCudaEngine::50] Error Code 4: Internal Error (Engine deserialization failed.) #5

Open JackIRose opened 6 months ago

JackIRose commented 6 months ago

After running

```
python tools/deploy.py configs/mmdet/detection/detection_tensorrt_dynamic-64x64-800x800.py rtmdet_tiny_8xb32-300e_coco.py rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth det.jpg --work-dir mmdeploy/RTMdet_t_trt --device cuda --dump-info
```

the engine file is generated successfully and the test image is produced. However, when I load the engine file with test.py for inference, the following error is reported:

```
03/05 13:55:14 - mmengine - WARNING - Failed to search registry with scope "mmdet" in the "Codebases" registry tree. As a workaround, the current "Codebases" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet" is a correct scope, or whether the registry is initialized.
03/05 13:55:14 - mmengine - WARNING - Failed to search registry with scope "mmdet" in the "mmdet_tasks" registry tree. As a workaround, the current "mmdet_tasks" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet" is a correct scope, or whether the registry is initialized.
03/05 13:55:14 - mmengine - WARNING - Failed to search registry with scope "mmdet" in the "backend_detectors" registry tree. As a workaround, the current "backend_detectors" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet" is a correct scope, or whether the registry is initialized.
03/05 13:55:14 - mmengine - WARNING - Could not load the library of tensorrt plugins. Because the file does not exist:
[03/05/2024-13:55:15] [TRT] [E] 1: [pluginV2Runner.cpp::nvinfer1::rt::load::293] Error Code 1: Serialization (Serialization assertion creator failed.Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
[03/05/2024-13:55:15] [TRT] [E] 4: [runtime.cpp::nvinfer1::Runtime::deserializeCudaEngine::50] Error Code 4: Internal Error (Engine deserialization failed.)
Traceback (most recent call last):
  File "shijian.py", line 25, in <module>
    model = task_processor.build_backend_model(backend_model)
  File "E:\工厂机器视觉-行为动作识别\mmdeploy-main\mmdeploy\codebase\mmdet\deploy\object_detection.py", line 159, in build_backend_model
    model = build_object_detection_model(
  File "E:\工厂机器视觉-行为动作识别\mmdeploy-main\mmdeploy\codebase\mmdet\deploy\object_detection_model.py", line 1111, in build_object_detection_model
    backend_detector = BACKEND_MODEL.build(
  File "C:\Users\xuguixun.conda\envs\python38\lib\site-packages\mmengine\registry\registry.py", line 570, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "C:\Users\xuguixun.conda\envs\python38\lib\site-packages\mmengine\registry\build_functions.py", line 121, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
  File "E:\工厂机器视觉-行为动作识别\mmdeploy-main\mmdeploy\codebase\mmdet\deploy\object_detection_model.py", line 56, in __init__
    self._init_wrapper(
  File "E:\工厂机器视觉-行为动作识别\mmdeploy-main\mmdeploy\codebase\mmdet\deploy\object_detection_model.py", line 70, in _init_wrapper
    self.wrapper = BaseBackendModel._build_wrapper(
  File "E:\工厂机器视觉-行为动作识别\mmdeploy-main\mmdeploy\codebase\base\backend_model.py", line 65, in _build_wrapper
    return backend_mgr.build_wrapper(backend_files, device, input_names,
  File "E:\工厂机器视觉-行为动作识别\mmdeploy-main\mmdeploy\backend\tensorrt\backend_manager.py", line 34, in build_wrapper
    return TRTWrapper(engine=backend_files[0], output_names=output_names)
  File "E:\工厂机器视觉-行为动作识别\mmdeploy-main\mmdeploy\backend\tensorrt\wrapper.py", line 90, in __init__
    raise TypeError(f'engine should be str or trt.ICudaEngine, \
TypeError: engine should be str or trt.ICudaEngine, but given: <class 'NoneType'>
```
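The first `[TRT] [E] 1: Serialization` error usually means the engine was built with custom TensorRT ops, but the plugin library registering the matching `IPluginCreator`s was never loaded before deserialization — the earlier mmengine warning "Could not load the library of tensorrt plugins. Because the file does not exist" points at the same cause. As a minimal sketch (the helper name and the library path are illustrative, not mmdeploy API), loading the plugin shared library with `ctypes` before touching the engine is enough to register its creators:

```python
import ctypes
import os


def load_trt_plugins(lib_path):
    """Load a TensorRT plugin library so its IPluginCreators self-register.

    Returns True on success, False if the file is missing or fails to load.
    """
    if not os.path.exists(lib_path):
        return False
    try:
        # Loading the .dll/.so runs its static plugin-registration code.
        ctypes.CDLL(lib_path)
        return True
    except OSError:
        return False


# Hypothetical path; adjust to wherever your mmdeploy custom-ops build put it:
# load_trt_plugins(r'E:\mmdeploy-main\build\bin\Release\mmdeploy_tensorrt_ops.dll')
```

Call this before the engine is deserialized (i.e. before `build_backend_model`). mmdeploy normally attempts this itself, which is why the "file does not exist" warning matters here: the custom-ops library was likely never built, or is not at the path mmdeploy expects.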

The test.py file is as follows:

```python
from mmdeploy.apis.utils import build_task_processor
from mmdeploy.utils import get_input_shape, load_config
import time
import tensorrt as trt

deploy_cfg = 'configs/mmdet/detection/detection_tensorrt_dynamic-64x64-800x800.py'
model_cfg = 'rtmdet_tiny_8xb32-300e_coco.py'
device = 'cuda'
backend_model = ['mmdeploy/RTMdet_t_trt/end2end.engine']
image = 'cityscapes.png'

# read deploy_cfg and model_cfg
deploy_cfg, model_cfg = load_config(deploy_cfg, model_cfg)

# build task and backend model
task_processor = build_task_processor(model_cfg, deploy_cfg, device)
model = task_processor.build_backend_model(backend_model)

# process input image
input_shape = get_input_shape(deploy_cfg)
model_inputs, _ = task_processor.create_input(image, input_shape)

# do model inference
num_warmup = 5
pure_inf_time = 0

result = model.test_step(model_inputs)

# visualize results
task_processor.visualize(
    image=image,
    model=model,
    result=result[0],
    window_name='visualize',
    output_file='output_pose.png')
```
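Note that the final `TypeError: engine should be str or trt.ICudaEngine, but given: <class 'NoneType'>` only fires after deserialization has already returned `None`, which hides the real cause. A small pre-flight check before `build_backend_model` fails earlier with a clearer message (pure Python, no mmdeploy API assumed; it cannot catch a file that exists but fails to deserialize for lack of plugins):

```python
import os


def check_backend_files(backend_files):
    """Raise early with a clear message if an engine file is missing or empty."""
    for path in backend_files:
        if not os.path.isfile(path):
            raise FileNotFoundError(f'engine file not found: {path}')
        if os.path.getsize(path) == 0:
            raise ValueError(f'engine file is empty: {path}')
    return True


# e.g. check_backend_files(backend_model) right before build_backend_model(...)
```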

Dominic23331 commented 6 months ago

I only use mmdeploy to convert the model; I am not familiar with the inference side. You could open an issue in the mmdeploy repository.

JackIRose commented 6 months ago

> I only use mmdeploy to convert the model; I am not familiar with the inference side. You could open an issue in the mmdeploy repository.

Could you describe the detailed workflow for converting RTMDet and RTMPose from PyTorch to TensorRT? Thanks a lot ♪(・ω・)ノ!

Dominic23331 commented 6 months ago

Please see README.md.

JackIRose commented 6 months ago

> Please see README.md.

Thank you very much for your earlier answer. Using mmdeploy's bundled deploy.py, I was able to convert the 17-keypoint model from this project and run it successfully. However, after converting the whole-body keypoint model with deploy.py, detection works correctly in the Python environment, but the image produced in the C++ environment is completely wrong. Is this because the C++ code only supports RTMPose models with 17 keypoints? Looking forward to your reply, many thanks. Below are the detection images produced by loading the engine in Python and in C++, respectively: output_tensorrt / Screenshot 2024-03-07 100607

Dominic23331 commented 6 months ago

Yes, this code only supports models trained on the COCO dataset. The whole-body keypoint model needs some modifications before it can be used.

JackIRose commented 6 months ago

> Yes, this code only supports models trained on the COCO dataset. The whole-body keypoint model needs some modifications before it can be used.

Which part of the code specifically needs to be modified?

Dominic23331 commented 6 months ago

You need to modify the model's decoding part; I only implemented decoding for 17 keypoints here.
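The change above amounts to taking the keypoint count from the output shape instead of hard-coding 17. As a hedged, pure-Python illustration of SimCC-style decoding (argmax over the per-keypoint x/y score vectors, divided by the SimCC split ratio — not the project's actual C++ decoder; the default ratio of 2.0 is an assumption):

```python
def decode_simcc(simcc_x, simcc_y, simcc_split_ratio=2.0):
    """Decode SimCC outputs into (x, y, score) tuples per keypoint.

    simcc_x / simcc_y: per-keypoint score vectors over the x and y axes.
    The keypoint count (17 for COCO, 133 for whole-body) comes from
    len(simcc_x) rather than being hard-coded.
    """
    keypoints = []
    for vx, vy in zip(simcc_x, simcc_y):
        ix = max(range(len(vx)), key=vx.__getitem__)  # argmax over x bins
        iy = max(range(len(vy)), key=vy.__getitem__)  # argmax over y bins
        score = min(vx[ix], vy[iy])
        keypoints.append((ix / simcc_split_ratio, iy / simcc_split_ratio, score))
    return keypoints
```

In the C++ code the analogous fix would be to size the decode loop (and any fixed 17-element buffers or skeleton definitions used for drawing) from the engine's actual output dimensions.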