grimoire / mmdetection-to-tensorrt

convert mmdetection model to tensorrt, support fp16, int8, batch input, dynamic shape etc.

_pickle.UnpicklingError: unpickling stack underflow #123

Closed xuweidongkobe closed 2 years ago

xuweidongkobe commented 2 years ago

@grimoire hello, I converted the YOLOX and Faster R-CNN models to TRT engines successfully in the Docker container. The script is:

```python
import torch
from mmdet2trt import mmdet2trt

opt_shape_param = [
    [
        [1, 3, 320, 320],    # min shape
        [1, 3, 800, 1344],   # optimize shape
        [1, 3, 1344, 1344],  # max shape
    ]
]
max_workspace_size = 1 << 30  # some modules and tactics need a large workspace.
cfg_path = "/root/space/my_test/faster_rcnn_r50_fpn_coco.py"
weight_path = "/root/space/my_test/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth"
save_model_path = "/root/space/my_test/fasterrcnn_nofp16.pth"
save_engine_path = "/root/space/my_test/fasterrcnn_nofp16.engine"

trt_model = mmdet2trt(
    cfg_path,
    weight_path,
    opt_shape_param=opt_shape_param,
    fp16_mode=False,
    max_workspace_size=max_workspace_size)

# save converted model
torch.save(trt_model.state_dict(), save_model_path)

# save engine if you want to use it in the c++ api
with open(save_engine_path, mode='wb') as f:
    f.write(trt_model.state_dict()['engine'])
```
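As a side note, the raw bytes written to `save_engine_path` can be sanity-checked by deserializing them with the TensorRT Python runtime. A minimal sketch, assuming the `tensorrt` package is available inside the Docker image:

```python
import tensorrt as trt

# deserialize the saved engine bytes to confirm the file itself is valid.
logger = trt.Logger(trt.Logger.WARNING)
with open(save_engine_path, mode='rb') as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
assert engine is not None, "engine failed to deserialize"
```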

But when I try to run inference on an image, the error is `_pickle.UnpicklingError: unpickling stack underflow`. Both models give the same error; can you help me? The inference demo is:

```python
from mmdet.apis import inference_detector
from mmdet2trt.apis import create_wrap_detector

trt_model = "./fasterrcnn_nofp16.engine"
cfg_path = "./faster_rcnn_r50_fpn_coco.py"
device_id = "cuda:0"

# create wrap detector
trt_detector = create_wrap_detector(trt_model, cfg_path, device_id)

image_path = "./5329.png"

# result shares the same format as mmdetection
result = inference_detector(trt_detector, image_path)
print(result)

# visualize
trt_detector.show_result(
    image_path,
    result,
    score_thr=0.4,
    win_name='mmdet2trt',
    show=False)
```
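One hedged guess at the cause, as an assumption not confirmed in this thread: `trt_model` above points at the raw `.engine` file, and if `create_wrap_detector` falls back to `torch.load` for a path argument, feeding it non-pickle bytes would raise exactly this kind of error:

```python
import torch

# hypothetical repro: torch.load expects a pickled checkpoint, so pointing it
# at raw TensorRT engine bytes raises _pickle.UnpicklingError.
torch.load("./fasterrcnn_nofp16.engine")
```

If that is the case, passing the `.pth` file saved with `torch.save` (`fasterrcnn_nofp16.pth`) instead of the `.engine` file may avoid the unpickling path; the `.engine` file is meant for the C++ API.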