It should be the path passed to --work-dir. Also, --dump-info must be added so that the meta information needed by the SDK is generated when converting the model.
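For reference, a minimal sketch of loading the SDK model from that directory (this assumes the mmdeploy 0.6 C API; the header name and the paths below are placeholders and may differ in your setup):

#include <iostream>

#include "detector.h"  // or "mmdeploy/detector.h", depending on the SDK version/layout

int main() {
  mm_handle_t detector{};
  // Pass the --work-dir directory produced with --dump-info (it contains deploy.json,
  // pipeline.json, detail.json and the backend model), not a single model file.
  const char* model_path = "D:\\myonnx";  // note the escaped backslash in the C string literal
  int status = mmdeploy_detector_create_by_path(model_path, "cpu", 0, &detector);
  if (status != MM_SUCCESS) {
    std::cout << "failed to create detector, code: " << status << std::endl;
    return 1;
  }
  mmdeploy_detector_destroy(detector);
  return 0;
}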
@lzhangzz I added "--dump-info" and converted the pth model to ONNX format, with "--work-dir" set to "myonnx". The myonnx folder contains the following files: deploy.json, detail.json, end2end.onnx, output_onnxruntime, pipeline.json. Then I copied the myonnx folder to the D: drive and set the model_path parameter of mmdeploy_detector_create_by_path to "D:\myonnx". But the error "no ModelImpl can read sdk_model" still occurred. Can you provide some help?
That's weird. Can you show me how you passed the path to the function?
Notice that if you pass the path as a literal string in C, it should be written as "D:\\myonnx"
@lzhangzz
The code is as follows:
auto device_name = "cpu";
auto model_path = "D:\myonnx";
auto image_path = "E:\test.jpg";
mm_handle_t detector{};
int status{};
status = mmdeploy_detector_create_by_path(model_path, device_name, 0, &detector);
Try:
auto device_name = "cpu";
auto model_path = "D:\\myonnx"; // <-
auto image_path = "E:\\test.jpg"; // <-
In fact, my code is the same as your code above. Could it be possible that the ONNX model I converted is not correct? @lzhangzz
The command I used to convert the PTH model is as follows:
python tools/deploy.py \
    configs/mmdet/detection/detection_onnxruntime_dynamic.py \
    ../mmdetection/configs/yolox/yolox_s_8x8_300e_coco.py \
    ../mmdetection/work_dirs/yolox_s_8x8_300e_coco/latest.pth \
    ../mmdetection/data/coco/val2017/23.jpg \
    --work-dir myonnx \
    --dump-info
Can you share the content of myonnx/deploy.json?
@lzhangzz deploy.json:
{
  "version": "0.6.0",
  "task": "Detector",
  "models": [
    {
      "name": "yolox",
      "net": "end2end.onnx",
      "weights": "",
      "backend": "onnxruntime",
      "precision": "FP32",
      "batch_size": 1,
      "dynamic_shape": true
    }
  ],
  "customs": []
}
Could it be caused by not installing the onnxruntime backend before converting the pth model to the onnx format? @lzhangzz
Your deploy.json is the same as mine.
Could it be caused by not installing the onnxruntime backend before converting the pth model to the onnx format?
Unlikely, if the conversion is successful.
Can you share all the logs when running the program?
@lzhangzz The logs in the console:
[mmdeploy][error][model.cpp 44] no ModelImpl can read sdk_model D:\myonnx
[mmdeploy][error][model.cpp 15] load model failed. Its file path is 'D:\myonnx'
[mmdeploy][error][model.cpp 20] failed to create model: not supported (2) @ :0
Are you using the static-lib or dynamic-lib (DLL) build of mmdeploy?
I used this pre-built version: mmdeploy-0.6.0-windows-amd64-onnxruntime1.8.1 @lzhangzz
Linker > Input in Visual Studio: mmdeploy_detector.lib @lzhangzz
You might need this: https://github.com/open-mmlab/mmdeploy/issues/584
There are some errors: LNK2005 _NULL_IMPORT_DESCRIPTOR is already defined in opencv_world346.lib(opencv_world346.dll) @lzhangzz
The prebuilt package is built against OpenCV 4.5.5. If you are using OpenCV 3, you may need to build mmdeploy manually.
Also, it is recommended to try mmdeploy's cmake project (like the demos) first before creating your own VS project.
@TTgogogo is it still an issue?
Hi, I have the same issue. I list my information below:
python3 tools/deploy.py \
./configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
/workspace/MOT/open-mmlab/mmdetection/configs/yolox/yolox_tiny_8x8_300e_bdd100k.py \
/workspace/MOT/open-mmlab/mmdetection/work_dirs/yolox_tiny_8x8_300e_bdd100k/epoch_100.pth \
/workspace/MOT/open-mmlab/mmdetection/fe189115-8dabb5b1.jpg \
--work-dir ./work-dir \
--device cuda:0 \
--dump-info
[2022-07-21 15:27:54.766] [mmdeploy] [info] [model.cpp:95] Register 'DirectoryModel'
[2022-07-21 15:27:55.823] [mmdeploy] [info] [model.cpp:95] Register 'DirectoryModel'
[2022-07-21 15:27:57.437] [mmdeploy] [info] [model.cpp:95] Register 'DirectoryModel'
2022-07-21 15:27:57,440 - mmdeploy - INFO - Start pipeline mmdeploy.apis.pytorch2onnx.torch2onnx in subprocess
load checkpoint from local path: /workspace/MOT/open-mmlab/mmdetection/work_dirs/yolox_tiny_8x8_300e_bdd100k/epoch_100.pth
The model and loaded state dict do not match exactly
unexpected key in source state_dict: ema_backbone_stem_conv_conv_weight, ema_backbone_stem_conv_bn_weight, ema_backbone_stem_conv_bn_bias, ema_backbone_stem_conv_bn_running_mean, ema_backbone_stem_conv_bn_running_var, ema_backbone_stem_conv_bn_num_batches_tracked, ema_backbone_stage1_0_conv_weight, ema_backbone_stage1_0_bn_weight, ema_backbone_stage1_0_bn_bias, ema_backbone_stage1_0_bn_running_mean, ema_backbone_stage1_0_bn_running_var, ema_backbone_stage1_0_bn_num_batches_tracked, ema_backbone_stage1_1_main_conv_conv_weight, ema_backbone_stage1_1_main_conv_bn_weight, ema_backbone_stage1_1_main_conv_bn_bias, ema_backbone_stage1_1_main_conv_bn_running_mean, ema_backbone_stage1_1_main_conv_bn_running_var, ema_backbone_stage1_1_main_conv_bn_num_batches_tracked, ema_backbone_stage1_1_short_conv_conv_weight, ema_backbone_stage1_1_short_conv_bn_weight, ema_backbone_stage1_1_short_conv_bn_bias, ema_backbone_stage1_1_short_conv_bn_running_mean, ema_backbone_stage1_1_short_conv_bn_running_var, ema_backbone_stage1_1_short_conv_bn_num_batches_tracked, ema_backbone_stage1_1_final_conv_conv_weight, ema_backbone_stage1_1_final_conv_bn_weight, ema_backbone_stage1_1_final_conv_bn_bias, ema_backbone_stage1_1_final_conv_bn_running_mean, ema_backbone_stage1_1_final_conv_bn_running_var, ema_backbone_stage1_1_final_conv_bn_num_batches_tracked, ema_backbone_stage1_1_blocks_0_conv1_conv_weight, ema_backbone_stage1_1_blocks_0_conv1_bn_weight, ema_backbone_stage1_1_blocks_0_conv1_bn_bias, ema_backbone_stage1_1_blocks_0_conv1_bn_running_mean, ema_backbone_stage1_1_blocks_0_conv1_bn_running_var, ema_backbone_stage1_1_blocks_0_conv1_bn_num_batches_tracked, ema_backbone_stage1_1_blocks_0_conv2_conv_weight, ema_backbone_stage1_1_blocks_0_conv2_bn_weight, ema_backbone_stage1_1_blocks_0_conv2_bn_bias, ema_backbone_stage1_1_blocks_0_conv2_bn_running_mean, ema_backbone_stage1_1_blocks_0_conv2_bn_running_var, ema_backbone_stage1_1_blocks_0_conv2_bn_num_batches_tracked, ema_backbone_stage2_0_conv_weight, ema_backbone_stage2_0_bn_weight, ema_backbone_stage2_0_bn_bias, ema_backbone_stage2_0_bn_running_mean, ema_backbone_stage2_0_bn_running_var, ema_backbone_stage2_0_bn_num_batches_tracked, ema_backbone_stage2_1_main_conv_conv_weight, ema_backbone_stage2_1_main_conv_bn_weight, ema_backbone_stage2_1_main_conv_bn_bias, ema_backbone_stage2_1_main_conv_bn_running_mean, ema_backbone_stage2_1_main_conv_bn_running_var, ema_backbone_stage2_1_main_conv_bn_num_batches_tracked, ema_backbone_stage2_1_short_conv_conv_weight, ema_backbone_stage2_1_short_conv_bn_weight, ema_backbone_stage2_1_short_conv_bn_bias, ema_backbone_stage2_1_short_conv_bn_running_mean, ema_backbone_stage2_1_short_conv_bn_running_var, ema_backbone_stage2_1_short_conv_bn_num_batches_tracked, ema_backbone_stage2_1_final_conv_conv_weight, ema_backbone_stage2_1_final_conv_bn_weight, ema_backbone_stage2_1_final_conv_bn_bias, ema_backbone_stage2_1_final_conv_bn_running_mean, ema_backbone_stage2_1_final_conv_bn_running_var, ema_backbone_stage2_1_final_conv_bn_num_batches_tracked, ema_backbone_stage2_1_blocks_0_conv1_conv_weight, ema_backbone_stage2_1_blocks_0_conv1_bn_weight, ema_backbone_stage2_1_blocks_0_conv1_bn_bias, ema_backbone_stage2_1_blocks_0_conv1_bn_running_mean, ema_backbone_stage2_1_blocks_0_conv1_bn_running_var, ema_backbone_stage2_1_blocks_0_conv1_bn_num_batches_tracked, ema_backbone_stage2_1_blocks_0_conv2_conv_weight, ema_backbone_stage2_1_blocks_0_conv2_bn_weight, ema_backbone_stage2_1_blocks_0_conv2_bn_bias, 
ema_backbone_stage2_1_blocks_0_conv2_bn_running_mean, ema_backbone_stage2_1_blocks_0_conv2_bn_running_var, ema_backbone_stage2_1_blocks_0_conv2_bn_num_batches_tracked, ema_backbone_stage2_1_blocks_1_conv1_conv_weight, ema_backbone_stage2_1_blocks_1_conv1_bn_weight, ema_backbone_stage2_1_blocks_1_conv1_bn_bias, ema_backbone_stage2_1_blocks_1_conv1_bn_running_mean, ema_backbone_stage2_1_blocks_1_conv1_bn_running_var, ema_backbone_stage2_1_blocks_1_conv1_bn_num_batches_tracked, ema_backbone_stage2_1_blocks_1_conv2_conv_weight, ema_backbone_stage2_1_blocks_1_conv2_bn_weight, ema_backbone_stage2_1_blocks_1_conv2_bn_bias, ema_backbone_stage2_1_blocks_1_conv2_bn_running_mean, ema_backbone_stage2_1_blocks_1_conv2_bn_running_var, ema_backbone_stage2_1_blocks_1_conv2_bn_num_batches_tracked, ema_backbone_stage2_1_blocks_2_conv1_conv_weight, ema_backbone_stage2_1_blocks_2_conv1_bn_weight, ema_backbone_stage2_1_blocks_2_conv1_bn_bias, ema_backbone_stage2_1_blocks_2_conv1_bn_running_mean, ema_backbone_stage2_1_blocks_2_conv1_bn_running_var, ema_backbone_stage2_1_blocks_2_conv1_bn_num_batches_tracked, ema_backbone_stage2_1_blocks_2_conv2_conv_weight, ema_backbone_stage2_1_blocks_2_conv2_bn_weight, ema_backbone_stage2_1_blocks_2_conv2_bn_bias, ema_backbone_stage2_1_blocks_2_conv2_bn_running_mean, ema_backbone_stage2_1_blocks_2_conv2_bn_running_var, ema_backbone_stage2_1_blocks_2_conv2_bn_num_batches_tracked, ema_backbone_stage3_0_conv_weight, ema_backbone_stage3_0_bn_weight, ema_backbone_stage3_0_bn_bias, ema_backbone_stage3_0_bn_running_mean, ema_backbone_stage3_0_bn_running_var, ema_backbone_stage3_0_bn_num_batches_tracked, ema_backbone_stage3_1_main_conv_conv_weight, ema_backbone_stage3_1_main_conv_bn_weight, ema_backbone_stage3_1_main_conv_bn_bias, ema_backbone_stage3_1_main_conv_bn_running_mean, ema_backbone_stage3_1_main_conv_bn_running_var, ema_backbone_stage3_1_main_conv_bn_num_batches_tracked, ema_backbone_stage3_1_short_conv_conv_weight, ema_backbone_stage3_1_short_conv_bn_weight, ema_backbone_stage3_1_short_conv_bn_bias, ema_backbone_stage3_1_short_conv_bn_running_mean, ema_backbone_stage3_1_short_conv_bn_running_var, ema_backbone_stage3_1_short_conv_bn_num_batches_tracked, ema_backbone_stage3_1_final_conv_conv_weight, ema_backbone_stage3_1_final_conv_bn_weight, ema_backbone_stage3_1_final_conv_bn_bias, ema_backbone_stage3_1_final_conv_bn_running_mean, ema_backbone_stage3_1_final_conv_bn_running_var, ema_backbone_stage3_1_final_conv_bn_num_batches_tracked, ema_backbone_stage3_1_blocks_0_conv1_conv_weight, ema_backbone_stage3_1_blocks_0_conv1_bn_weight, ema_backbone_stage3_1_blocks_0_conv1_bn_bias, ema_backbone_stage3_1_blocks_0_conv1_bn_running_mean, ema_backbone_stage3_1_blocks_0_conv1_bn_running_var, ema_backbone_stage3_1_blocks_0_conv1_bn_num_batches_tracked, ema_backbone_stage3_1_blocks_0_conv2_conv_weight, ema_backbone_stage3_1_blocks_0_conv2_bn_weight, ema_backbone_stage3_1_blocks_0_conv2_bn_bias, ema_backbone_stage3_1_blocks_0_conv2_bn_running_mean, ema_backbone_stage3_1_blocks_0_conv2_bn_running_var, ema_backbone_stage3_1_blocks_0_conv2_bn_num_batches_tracked, ema_backbone_stage3_1_blocks_1_conv1_conv_weight, ema_backbone_stage3_1_blocks_1_conv1_bn_weight, ema_backbone_stage3_1_blocks_1_conv1_bn_bias, ema_backbone_stage3_1_blocks_1_conv1_bn_running_mean, ema_backbone_stage3_1_blocks_1_conv1_bn_running_var, ema_backbone_stage3_1_blocks_1_conv1_bn_num_batches_tracked, ema_backbone_stage3_1_blocks_1_conv2_conv_weight, ema_backbone_stage3_1_blocks_1_conv2_bn_weight, 
ema_backbone_stage3_1_blocks_1_conv2_bn_bias, ema_backbone_stage3_1_blocks_1_conv2_bn_running_mean, ema_backbone_stage3_1_blocks_1_conv2_bn_running_var, ema_backbone_stage3_1_blocks_1_conv2_bn_num_batches_tracked, ema_backbone_stage3_1_blocks_2_conv1_conv_weight, ema_backbone_stage3_1_blocks_2_conv1_bn_weight, ema_backbone_stage3_1_blocks_2_conv1_bn_bias, ema_backbone_stage3_1_blocks_2_conv1_bn_running_mean, ema_backbone_stage3_1_blocks_2_conv1_bn_running_var, ema_backbone_stage3_1_blocks_2_conv1_bn_num_batches_tracked, ema_backbone_stage3_1_blocks_2_conv2_conv_weight, ema_backbone_stage3_1_blocks_2_conv2_bn_weight, ema_backbone_stage3_1_blocks_2_conv2_bn_bias, ema_backbone_stage3_1_blocks_2_conv2_bn_running_mean, ema_backbone_stage3_1_blocks_2_conv2_bn_running_var, ema_backbone_stage3_1_blocks_2_conv2_bn_num_batches_tracked, ema_backbone_stage4_0_conv_weight, ema_backbone_stage4_0_bn_weight, ema_backbone_stage4_0_bn_bias, ema_backbone_stage4_0_bn_running_mean, ema_backbone_stage4_0_bn_running_var, ema_backbone_stage4_0_bn_num_batches_tracked, ema_backbone_stage4_1_conv1_conv_weight, ema_backbone_stage4_1_conv1_bn_weight, ema_backbone_stage4_1_conv1_bn_bias, ema_backbone_stage4_1_conv1_bn_running_mean, ema_backbone_stage4_1_conv1_bn_running_var, ema_backbone_stage4_1_conv1_bn_num_batches_tracked, ema_backbone_stage4_1_conv2_conv_weight, ema_backbone_stage4_1_conv2_bn_weight, ema_backbone_stage4_1_conv2_bn_bias, ema_backbone_stage4_1_conv2_bn_running_mean, ema_backbone_stage4_1_conv2_bn_running_var, ema_backbone_stage4_1_conv2_bn_num_batches_tracked, ema_backbone_stage4_2_main_conv_conv_weight, ema_backbone_stage4_2_main_conv_bn_weight, ema_backbone_stage4_2_main_conv_bn_bias, ema_backbone_stage4_2_main_conv_bn_running_mean, ema_backbone_stage4_2_main_conv_bn_running_var, ema_backbone_stage4_2_main_conv_bn_num_batches_tracked, ema_backbone_stage4_2_short_conv_conv_weight, ema_backbone_stage4_2_short_conv_bn_weight, ema_backbone_stage4_2_short_conv_bn_bias, ema_backbone_stage4_2_short_conv_bn_running_mean, ema_backbone_stage4_2_short_conv_bn_running_var, ema_backbone_stage4_2_short_conv_bn_num_batches_tracked, ema_backbone_stage4_2_final_conv_conv_weight, ema_backbone_stage4_2_final_conv_bn_weight, ema_backbone_stage4_2_final_conv_bn_bias, ema_backbone_stage4_2_final_conv_bn_running_mean, ema_backbone_stage4_2_final_conv_bn_running_var, ema_backbone_stage4_2_final_conv_bn_num_batches_tracked, ema_backbone_stage4_2_blocks_0_conv1_conv_weight, ema_backbone_stage4_2_blocks_0_conv1_bn_weight, ema_backbone_stage4_2_blocks_0_conv1_bn_bias, ema_backbone_stage4_2_blocks_0_conv1_bn_running_mean, ema_backbone_stage4_2_blocks_0_conv1_bn_running_var, ema_backbone_stage4_2_blocks_0_conv1_bn_num_batches_tracked, ema_backbone_stage4_2_blocks_0_conv2_conv_weight, ema_backbone_stage4_2_blocks_0_conv2_bn_weight, ema_backbone_stage4_2_blocks_0_conv2_bn_bias, ema_backbone_stage4_2_blocks_0_conv2_bn_running_mean, ema_backbone_stage4_2_blocks_0_conv2_bn_running_var, ema_backbone_stage4_2_blocks_0_conv2_bn_num_batches_tracked, ema_neck_reduce_layers_0_conv_weight, ema_neck_reduce_layers_0_bn_weight, ema_neck_reduce_layers_0_bn_bias, ema_neck_reduce_layers_0_bn_running_mean, ema_neck_reduce_layers_0_bn_running_var, ema_neck_reduce_layers_0_bn_num_batches_tracked, ema_neck_reduce_layers_1_conv_weight, ema_neck_reduce_layers_1_bn_weight, ema_neck_reduce_layers_1_bn_bias, ema_neck_reduce_layers_1_bn_running_mean, ema_neck_reduce_layers_1_bn_running_var, ema_neck_reduce_layers_1_bn_num_batches_tracked, 
ema_neck_top_down_blocks_0_main_conv_conv_weight, ema_neck_top_down_blocks_0_main_conv_bn_weight, ema_neck_top_down_blocks_0_main_conv_bn_bias, ema_neck_top_down_blocks_0_main_conv_bn_running_mean, ema_neck_top_down_blocks_0_main_conv_bn_running_var, ema_neck_top_down_blocks_0_main_conv_bn_num_batches_tracked, ema_neck_top_down_blocks_0_short_conv_conv_weight, ema_neck_top_down_blocks_0_short_conv_bn_weight, ema_neck_top_down_blocks_0_short_conv_bn_bias, ema_neck_top_down_blocks_0_short_conv_bn_running_mean, ema_neck_top_down_blocks_0_short_conv_bn_running_var, ema_neck_top_down_blocks_0_short_conv_bn_num_batches_tracked, ema_neck_top_down_blocks_0_final_conv_conv_weight, ema_neck_top_down_blocks_0_final_conv_bn_weight, ema_neck_top_down_blocks_0_final_conv_bn_bias, ema_neck_top_down_blocks_0_final_conv_bn_running_mean, ema_neck_top_down_blocks_0_final_conv_bn_running_var, ema_neck_top_down_blocks_0_final_conv_bn_num_batches_tracked, ema_neck_top_down_blocks_0_blocks_0_conv1_conv_weight, ema_neck_top_down_blocks_0_blocks_0_conv1_bn_weight, ema_neck_top_down_blocks_0_blocks_0_conv1_bn_bias, ema_neck_top_down_blocks_0_blocks_0_conv1_bn_running_mean, ema_neck_top_down_blocks_0_blocks_0_conv1_bn_running_var, ema_neck_top_down_blocks_0_blocks_0_conv1_bn_num_batches_tracked, ema_neck_top_down_blocks_0_blocks_0_conv2_conv_weight, ema_neck_top_down_blocks_0_blocks_0_conv2_bn_weight, ema_neck_top_down_blocks_0_blocks_0_conv2_bn_bias, ema_neck_top_down_blocks_0_blocks_0_conv2_bn_running_mean, ema_neck_top_down_blocks_0_blocks_0_conv2_bn_running_var, ema_neck_top_down_blocks_0_blocks_0_conv2_bn_num_batches_tracked, ema_neck_top_down_blocks_1_main_conv_conv_weight, ema_neck_top_down_blocks_1_main_conv_bn_weight, ema_neck_top_down_blocks_1_main_conv_bn_bias, ema_neck_top_down_blocks_1_main_conv_bn_running_mean, ema_neck_top_down_blocks_1_main_conv_bn_running_var, ema_neck_top_down_blocks_1_main_conv_bn_num_batches_tracked, ema_neck_top_down_blocks_1_short_conv_conv_weight, ema_neck_top_down_blocks_1_short_conv_bn_weight, ema_neck_top_down_blocks_1_short_conv_bn_bias, ema_neck_top_down_blocks_1_short_conv_bn_running_mean, ema_neck_top_down_blocks_1_short_conv_bn_running_var, ema_neck_top_down_blocks_1_short_conv_bn_num_batches_tracked, ema_neck_top_down_blocks_1_final_conv_conv_weight, ema_neck_top_down_blocks_1_final_conv_bn_weight, ema_neck_top_down_blocks_1_final_conv_bn_bias, ema_neck_top_down_blocks_1_final_conv_bn_running_mean, ema_neck_top_down_blocks_1_final_conv_bn_running_var, ema_neck_top_down_blocks_1_final_conv_bn_num_batches_tracked, ema_neck_top_down_blocks_1_blocks_0_conv1_conv_weight, ema_neck_top_down_blocks_1_blocks_0_conv1_bn_weight, ema_neck_top_down_blocks_1_blocks_0_conv1_bn_bias, ema_neck_top_down_blocks_1_blocks_0_conv1_bn_running_mean, ema_neck_top_down_blocks_1_blocks_0_conv1_bn_running_var, ema_neck_top_down_blocks_1_blocks_0_conv1_bn_num_batches_tracked, ema_neck_top_down_blocks_1_blocks_0_conv2_conv_weight, ema_neck_top_down_blocks_1_blocks_0_conv2_bn_weight, ema_neck_top_down_blocks_1_blocks_0_conv2_bn_bias, ema_neck_top_down_blocks_1_blocks_0_conv2_bn_running_mean, ema_neck_top_down_blocks_1_blocks_0_conv2_bn_running_var, ema_neck_top_down_blocks_1_blocks_0_conv2_bn_num_batches_tracked, ema_neck_downsamples_0_conv_weight, ema_neck_downsamples_0_bn_weight, ema_neck_downsamples_0_bn_bias, ema_neck_downsamples_0_bn_running_mean, ema_neck_downsamples_0_bn_running_var, ema_neck_downsamples_0_bn_num_batches_tracked, ema_neck_downsamples_1_conv_weight, 
ema_neck_downsamples_1_bn_weight, ema_neck_downsamples_1_bn_bias, ema_neck_downsamples_1_bn_running_mean, ema_neck_downsamples_1_bn_running_var, ema_neck_downsamples_1_bn_num_batches_tracked, ema_neck_bottom_up_blocks_0_main_conv_conv_weight, ema_neck_bottom_up_blocks_0_main_conv_bn_weight, ema_neck_bottom_up_blocks_0_main_conv_bn_bias, ema_neck_bottom_up_blocks_0_main_conv_bn_running_mean, ema_neck_bottom_up_blocks_0_main_conv_bn_running_var, ema_neck_bottom_up_blocks_0_main_conv_bn_num_batches_tracked, ema_neck_bottom_up_blocks_0_short_conv_conv_weight, ema_neck_bottom_up_blocks_0_short_conv_bn_weight, ema_neck_bottom_up_blocks_0_short_conv_bn_bias, ema_neck_bottom_up_blocks_0_short_conv_bn_running_mean, ema_neck_bottom_up_blocks_0_short_conv_bn_running_var, ema_neck_bottom_up_blocks_0_short_conv_bn_num_batches_tracked, ema_neck_bottom_up_blocks_0_final_conv_conv_weight, ema_neck_bottom_up_blocks_0_final_conv_bn_weight, ema_neck_bottom_up_blocks_0_final_conv_bn_bias, ema_neck_bottom_up_blocks_0_final_conv_bn_running_mean, ema_neck_bottom_up_blocks_0_final_conv_bn_running_var, ema_neck_bottom_up_blocks_0_final_conv_bn_num_batches_tracked, ema_neck_bottom_up_blocks_0_blocks_0_conv1_conv_weight, ema_neck_bottom_up_blocks_0_blocks_0_conv1_bn_weight, ema_neck_bottom_up_blocks_0_blocks_0_conv1_bn_bias, ema_neck_bottom_up_blocks_0_blocks_0_conv1_bn_running_mean, ema_neck_bottom_up_blocks_0_blocks_0_conv1_bn_running_var, ema_neck_bottom_up_blocks_0_blocks_0_conv1_bn_num_batches_tracked, ema_neck_bottom_up_blocks_0_blocks_0_conv2_conv_weight, ema_neck_bottom_up_blocks_0_blocks_0_conv2_bn_weight, ema_neck_bottom_up_blocks_0_blocks_0_conv2_bn_bias, ema_neck_bottom_up_blocks_0_blocks_0_conv2_bn_running_mean, ema_neck_bottom_up_blocks_0_blocks_0_conv2_bn_running_var, ema_neck_bottom_up_blocks_0_blocks_0_conv2_bn_num_batches_tracked, ema_neck_bottom_up_blocks_1_main_conv_conv_weight, ema_neck_bottom_up_blocks_1_main_conv_bn_weight, ema_neck_bottom_up_blocks_1_main_conv_bn_bias, ema_neck_bottom_up_blocks_1_main_conv_bn_running_mean, ema_neck_bottom_up_blocks_1_main_conv_bn_running_var, ema_neck_bottom_up_blocks_1_main_conv_bn_num_batches_tracked, ema_neck_bottom_up_blocks_1_short_conv_conv_weight, ema_neck_bottom_up_blocks_1_short_conv_bn_weight, ema_neck_bottom_up_blocks_1_short_conv_bn_bias, ema_neck_bottom_up_blocks_1_short_conv_bn_running_mean, ema_neck_bottom_up_blocks_1_short_conv_bn_running_var, ema_neck_bottom_up_blocks_1_short_conv_bn_num_batches_tracked, ema_neck_bottom_up_blocks_1_final_conv_conv_weight, ema_neck_bottom_up_blocks_1_final_conv_bn_weight, ema_neck_bottom_up_blocks_1_final_conv_bn_bias, ema_neck_bottom_up_blocks_1_final_conv_bn_running_mean, ema_neck_bottom_up_blocks_1_final_conv_bn_running_var, ema_neck_bottom_up_blocks_1_final_conv_bn_num_batches_tracked, ema_neck_bottom_up_blocks_1_blocks_0_conv1_conv_weight, ema_neck_bottom_up_blocks_1_blocks_0_conv1_bn_weight, ema_neck_bottom_up_blocks_1_blocks_0_conv1_bn_bias, ema_neck_bottom_up_blocks_1_blocks_0_conv1_bn_running_mean, ema_neck_bottom_up_blocks_1_blocks_0_conv1_bn_running_var, ema_neck_bottom_up_blocks_1_blocks_0_conv1_bn_num_batches_tracked, ema_neck_bottom_up_blocks_1_blocks_0_conv2_conv_weight, ema_neck_bottom_up_blocks_1_blocks_0_conv2_bn_weight, ema_neck_bottom_up_blocks_1_blocks_0_conv2_bn_bias, ema_neck_bottom_up_blocks_1_blocks_0_conv2_bn_running_mean, ema_neck_bottom_up_blocks_1_blocks_0_conv2_bn_running_var, ema_neck_bottom_up_blocks_1_blocks_0_conv2_bn_num_batches_tracked, ema_neck_out_convs_0_conv_weight, 
ema_neck_out_convs_0_bn_weight, ema_neck_out_convs_0_bn_bias, ema_neck_out_convs_0_bn_running_mean, ema_neck_out_convs_0_bn_running_var, ema_neck_out_convs_0_bn_num_batches_tracked, ema_neck_out_convs_1_conv_weight, ema_neck_out_convs_1_bn_weight, ema_neck_out_convs_1_bn_bias, ema_neck_out_convs_1_bn_running_mean, ema_neck_out_convs_1_bn_running_var, ema_neck_out_convs_1_bn_num_batches_tracked, ema_neck_out_convs_2_conv_weight, ema_neck_out_convs_2_bn_weight, ema_neck_out_convs_2_bn_bias, ema_neck_out_convs_2_bn_running_mean, ema_neck_out_convs_2_bn_running_var, ema_neck_out_convs_2_bn_num_batches_tracked, ema_bbox_head_multi_level_cls_convs_0_0_conv_weight, ema_bbox_head_multi_level_cls_convs_0_0_bn_weight, ema_bbox_head_multi_level_cls_convs_0_0_bn_bias, ema_bbox_head_multi_level_cls_convs_0_0_bn_running_mean, ema_bbox_head_multi_level_cls_convs_0_0_bn_running_var, ema_bbox_head_multi_level_cls_convs_0_0_bn_num_batches_tracked, ema_bbox_head_multi_level_cls_convs_0_1_conv_weight, ema_bbox_head_multi_level_cls_convs_0_1_bn_weight, ema_bbox_head_multi_level_cls_convs_0_1_bn_bias, ema_bbox_head_multi_level_cls_convs_0_1_bn_running_mean, ema_bbox_head_multi_level_cls_convs_0_1_bn_running_var, ema_bbox_head_multi_level_cls_convs_0_1_bn_num_batches_tracked, ema_bbox_head_multi_level_cls_convs_1_0_conv_weight, ema_bbox_head_multi_level_cls_convs_1_0_bn_weight, ema_bbox_head_multi_level_cls_convs_1_0_bn_bias, ema_bbox_head_multi_level_cls_convs_1_0_bn_running_mean, ema_bbox_head_multi_level_cls_convs_1_0_bn_running_var, ema_bbox_head_multi_level_cls_convs_1_0_bn_num_batches_tracked, ema_bbox_head_multi_level_cls_convs_1_1_conv_weight, ema_bbox_head_multi_level_cls_convs_1_1_bn_weight, ema_bbox_head_multi_level_cls_convs_1_1_bn_bias, ema_bbox_head_multi_level_cls_convs_1_1_bn_running_mean, ema_bbox_head_multi_level_cls_convs_1_1_bn_running_var, ema_bbox_head_multi_level_cls_convs_1_1_bn_num_batches_tracked, ema_bbox_head_multi_level_cls_convs_2_0_conv_weight, ema_bbox_head_multi_level_cls_convs_2_0_bn_weight, ema_bbox_head_multi_level_cls_convs_2_0_bn_bias, ema_bbox_head_multi_level_cls_convs_2_0_bn_running_mean, ema_bbox_head_multi_level_cls_convs_2_0_bn_running_var, ema_bbox_head_multi_level_cls_convs_2_0_bn_num_batches_tracked, ema_bbox_head_multi_level_cls_convs_2_1_conv_weight, ema_bbox_head_multi_level_cls_convs_2_1_bn_weight, ema_bbox_head_multi_level_cls_convs_2_1_bn_bias, ema_bbox_head_multi_level_cls_convs_2_1_bn_running_mean, ema_bbox_head_multi_level_cls_convs_2_1_bn_running_var, ema_bbox_head_multi_level_cls_convs_2_1_bn_num_batches_tracked, ema_bbox_head_multi_level_reg_convs_0_0_conv_weight, ema_bbox_head_multi_level_reg_convs_0_0_bn_weight, ema_bbox_head_multi_level_reg_convs_0_0_bn_bias, ema_bbox_head_multi_level_reg_convs_0_0_bn_running_mean, ema_bbox_head_multi_level_reg_convs_0_0_bn_running_var, ema_bbox_head_multi_level_reg_convs_0_0_bn_num_batches_tracked, ema_bbox_head_multi_level_reg_convs_0_1_conv_weight, ema_bbox_head_multi_level_reg_convs_0_1_bn_weight, ema_bbox_head_multi_level_reg_convs_0_1_bn_bias, ema_bbox_head_multi_level_reg_convs_0_1_bn_running_mean, ema_bbox_head_multi_level_reg_convs_0_1_bn_running_var, ema_bbox_head_multi_level_reg_convs_0_1_bn_num_batches_tracked, ema_bbox_head_multi_level_reg_convs_1_0_conv_weight, ema_bbox_head_multi_level_reg_convs_1_0_bn_weight, ema_bbox_head_multi_level_reg_convs_1_0_bn_bias, ema_bbox_head_multi_level_reg_convs_1_0_bn_running_mean, ema_bbox_head_multi_level_reg_convs_1_0_bn_running_var, 
ema_bbox_head_multi_level_reg_convs_1_0_bn_num_batches_tracked, ema_bbox_head_multi_level_reg_convs_1_1_conv_weight, ema_bbox_head_multi_level_reg_convs_1_1_bn_weight, ema_bbox_head_multi_level_reg_convs_1_1_bn_bias, ema_bbox_head_multi_level_reg_convs_1_1_bn_running_mean, ema_bbox_head_multi_level_reg_convs_1_1_bn_running_var, ema_bbox_head_multi_level_reg_convs_1_1_bn_num_batches_tracked, ema_bbox_head_multi_level_reg_convs_2_0_conv_weight, ema_bbox_head_multi_level_reg_convs_2_0_bn_weight, ema_bbox_head_multi_level_reg_convs_2_0_bn_bias, ema_bbox_head_multi_level_reg_convs_2_0_bn_running_mean, ema_bbox_head_multi_level_reg_convs_2_0_bn_running_var, ema_bbox_head_multi_level_reg_convs_2_0_bn_num_batches_tracked, ema_bbox_head_multi_level_reg_convs_2_1_conv_weight, ema_bbox_head_multi_level_reg_convs_2_1_bn_weight, ema_bbox_head_multi_level_reg_convs_2_1_bn_bias, ema_bbox_head_multi_level_reg_convs_2_1_bn_running_mean, ema_bbox_head_multi_level_reg_convs_2_1_bn_running_var, ema_bbox_head_multi_level_reg_convs_2_1_bn_num_batches_tracked, ema_bbox_head_multi_level_conv_cls_0_weight, ema_bbox_head_multi_level_conv_cls_0_bias, ema_bbox_head_multi_level_conv_cls_1_weight, ema_bbox_head_multi_level_conv_cls_1_bias, ema_bbox_head_multi_level_conv_cls_2_weight, ema_bbox_head_multi_level_conv_cls_2_bias, ema_bbox_head_multi_level_conv_reg_0_weight, ema_bbox_head_multi_level_conv_reg_0_bias, ema_bbox_head_multi_level_conv_reg_1_weight, ema_bbox_head_multi_level_conv_reg_1_bias, ema_bbox_head_multi_level_conv_reg_2_weight, ema_bbox_head_multi_level_conv_reg_2_bias, ema_bbox_head_multi_level_conv_obj_0_weight, ema_bbox_head_multi_level_conv_obj_0_bias, ema_bbox_head_multi_level_conv_obj_1_weight, ema_bbox_head_multi_level_conv_obj_1_bias, ema_bbox_head_multi_level_conv_obj_2_weight, ema_bbox_head_multi_level_conv_obj_2_bias
2022-07-21 15:27:59,890 - mmdeploy - WARNING - DeprecationWarning: get_onnx_config will be deprecated in the future.
2022-07-21 15:27:59,891 - mmdeploy - INFO - Export PyTorch model to ONNX: ./work-dir/end2end.onnx.
/usr/local/lib/python3.8/dist-packages/mmdeploy/core/optimizers/function_marker.py:158: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  ys_shape = tuple(int(s) for s in ys.shape)
/usr/local/lib/python3.8/dist-packages/mmdeploy/codebase/mmdet/core/post_processing/bbox_nms.py:259: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  dets, labels = TRTBatchedNMSop.apply(boxes, scores, int(scores.shape[-1]),
/usr/local/lib/python3.8/dist-packages/mmdeploy/mmcv/ops/nms.py:178: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  out_boxes = min(num_boxes, after_topk)
2022-07-21 15:28:02,365 - mmdeploy - INFO - Execute onnx optimize passes.
2022-07-21 15:28:02,365 - mmdeploy - WARNING - Can not optimize model, please build torchscipt extension. More details: https://github.com/open-mmlab/mmdeploy/blob/master/docs/en/experimental/onnx_optimizer.md
2022-07-21 15:28:02,407 - mmdeploy - INFO - Finish pipeline mmdeploy.apis.pytorch2onnx.torch2onnx
[2022-07-21 15:28:03.757] [mmdeploy] [info] [model.cpp:95] Register 'DirectoryModel'
2022-07-21 15:28:03,760 - mmdeploy - INFO - Start pipeline mmdeploy.backend.tensorrt.onnx2tensorrt.onnx2tensorrt in subprocess
2022-07-21 15:28:03,840 - mmdeploy - INFO - Successfully loaded tensorrt plugins from /usr/local/lib/python3.8/dist-packages/mmdeploy/lib/libmmdeploy_tensorrt_ops.so
[07/21/2022-15:28:04] [TRT] [I] [MemUsageChange] Init CUDA: CPU +324, GPU +0, now: CPU 396, GPU 594 (MiB)
[07/21/2022-15:28:04] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +330, GPU +104, now: CPU 745, GPU 698 (MiB)
[07/21/2022-15:28:04] [TRT] [W] onnx2trt_utils.cpp:369: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[07/21/2022-15:28:04] [TRT] [W] onnx2trt_utils.cpp:395: One or more weights outside the range of INT32 was clamped
[07/21/2022-15:28:04] [TRT] [W] onnx2trt_utils.cpp:395: One or more weights outside the range of INT32 was clamped
[07/21/2022-15:28:04] [TRT] [W] onnx2trt_utils.cpp:395: One or more weights outside the range of INT32 was clamped
[07/21/2022-15:28:04] [TRT] [I] No importer registered for op: TRTBatchedNMS. Attempting to import as plugin.
[07/21/2022-15:28:04] [TRT] [I] Searching for plugin: TRTBatchedNMS, plugin_version: 1, plugin_namespace:
[07/21/2022-15:28:04] [TRT] [I] Successfully created plugin: TRTBatchedNMS
[07/21/2022-15:28:05] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +639, GPU +266, now: CPU 1431, GPU 964 (MiB)
[07/21/2022-15:28:05] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +363, GPU +260, now: CPU 1794, GPU 1224 (MiB)
[07/21/2022-15:28:05] [TRT] [W] TensorRT was linked against cuDNN 8.4.1 but loaded cuDNN 8.0.5
[07/21/2022-15:28:05] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
[07/21/2022-15:28:37] [TRT] [I] Some tactics do not have sufficient workspace memory to run. Increasing workspace size will enable more tactics, please check verbose output for requested sizes.
[07/21/2022-15:28:54] [TRT] [W] Myelin graph with multiple dynamic values may have poor performance if they differ. Dynamic values are:
[07/21/2022-15:28:54] [TRT] [W] (# 4 (RESHAPE 1 3 (# 2 E0) 2 (# 3 (SHAPE input)) | 1 3 (# 2 E0) 2 -1 2 zeroIsPlaceholder)) where E0=(RESHAPE 1 3 (# 2 (SHAPE input)) (# 3 (SHAPE input)) | 1 3 -1 2 (# 3 (SHAPE input)) zeroIsPlaceholder)
[07/21/2022-15:28:54] [TRT] [W] (# 2 (RESHAPE 1 3 (# 2 (SHAPE input)) (# 3 (SHAPE input)) | 1 3 -1 2 (# 3 (SHAPE input)) zeroIsPlaceholder))
[07/21/2022-15:29:02] [TRT] [I] Detected 1 inputs and 2 output network tensors.
[07/21/2022-15:29:02] [TRT] [W] Myelin graph with multiple dynamic values may have poor performance if they differ. Dynamic values are:
[07/21/2022-15:29:02] [TRT] [W] (# 4 (RESHAPE 1 3 (# 2 E0) 2 (# 3 (SHAPE input)) | 1 3 (# 2 E0) 2 -1 2 zeroIsPlaceholder)) where E0=(RESHAPE 1 3 (# 2 (SHAPE input)) (# 3 (SHAPE input)) | 1 3 -1 2 (# 3 (SHAPE input)) zeroIsPlaceholder)
[07/21/2022-15:29:02] [TRT] [W] (# 2 (RESHAPE 1 3 (# 2 (SHAPE input)) (# 3 (SHAPE input)) | 1 3 -1 2 (# 3 (SHAPE input)) zeroIsPlaceholder))
[07/21/2022-15:29:04] [TRT] [I] Total Host Persistent Memory: 167664
[07/21/2022-15:29:04] [TRT] [I] Total Device Persistent Memory: 7116288
[07/21/2022-15:29:04] [TRT] [I] Total Scratch Memory: 8026624
[07/21/2022-15:29:04] [TRT] [I] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 7 MiB, GPU 601 MiB
[07/21/2022-15:29:04] [TRT] [I] [BlockAssignment] Algorithm ShiftNTopDown took 63.8431ms to assign 12 blocks to 241 nodes requiring 119056900 bytes.
[07/21/2022-15:29:04] [TRT] [I] Total Activation Memory: 119056900
[07/21/2022-15:29:04] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +1, GPU +8, now: CPU 4393, GPU 2809 (MiB)
[07/21/2022-15:29:04] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 4393, GPU 2817 (MiB)
[07/21/2022-15:29:04] [TRT] [W] TensorRT was linked against cuDNN 8.4.1 but loaded cuDNN 8.0.5
[07/21/2022-15:29:04] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in building engine: CPU +4, GPU +31, now: CPU 4, GPU 31 (MiB)
[07/21/2022-15:29:04] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[07/21/2022-15:29:04] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
2022-07-21 15:29:04,298 - mmdeploy - INFO - Finish pipeline mmdeploy.backend.tensorrt.onnx2tensorrt.onnx2tensorrt
2022-07-21 15:29:05,135 - mmdeploy - WARNING - "visualize_model" has been skipped may be because it's running on a headless device.
2022-07-21 15:29:05,135 - mmdeploy - INFO - All process success.
- CMakeLists.txt and build command
cmake -DMMDeploy_DIR=/workspace/MOT/open-mmlab/mmdeploy-0.6.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0/sdk/lib/cmake/MMDeploy/ ..
cmake_minimum_required( VERSION 3.1 )
project(Detector)
set (CMAKE_CXX_STANDARD 14)
find_package(MMDeploy REQUIRED)
add_library(${PROJECT_NAME} SHARED
src/gw_detector.cpp
)
# Load MMDeploy modules
mmdeploy_load_static(${PROJECT_NAME} MMDeployStaticModules)
mmdeploy_load_dynamic(${PROJECT_NAME} MMDeployDynamicModules)
target_link_libraries(${PROJECT_NAME}
PUBLIC
GwMOTbase
Eigen3::Eigen
${OpenCV_LIBS}
MMDeployLibs
)
target_include_directories(${PROJECT_NAME}
PUBLIC
include
)
C++ code:
constexpr int device_id = 0;
// get mmdeploy model path of faster r-cnn
const std::string device_name = "cuda";
// const std::string model_path = "/workspace/MOT/mot/module/Detector/model/yolox_tiny.engine";
const std::string model_path = "/workspace/MOT/open-mmlab/mmdeploy/work-dir/end2end.engine";
int status{};
status = mmdeploy_detector_create_by_path(model_path.c_str(), device_name.c_str(), 0, &detector_);
if (status != MM_SUCCESS) {
std::cout << "failed to create detector, code: " << (int)status << std::endl;
}
Error log:
[2022-07-21 15:32:09.835] [mmdeploy] [info] [model.cpp:95] Register 'DirectoryModel'
Number of video frames> [2022-07-21 15:32:09.835] [mmdeploy] [info] [model.cpp:95] Register 'DirectoryModel'
Number of video frames: 203
Number of annotations: 3521
imganns size: 203
[2022-07-21 15:32:10.318] [mmdeploy] [error] [model.cpp:44] no ModelImpl can read sdk_model /workspace/MOT/open-mmlab/mmdeploy/work-dir/end2end.engine
[2022-07-21 15:32:10.318] [mmdeploy] [error] [model.cpp:15] load model failed. Its file path is '/workspace/MOT/open-mmlab/mmdeploy/work-dir/end2end.engine'
[2022-07-21 15:32:10.318] [mmdeploy] [error] [model.cpp:20] failed to create model: not supported (2) @ /home/PJLAB/lvhan/Documents/projects/mmdeploy-release/mmdeploy/csrc/mmdeploy/core/model.cpp:45
failed to create detector, code: 6: 203
Number of annotations: 3521
imganns size: 203
[2022-07-21 15:32:10.318] [mmdeploy] [error] [model.cpp:44] no ModelImpl can read sdk_model /workspace/MOT/open-mmlab/mmdeploy/work-dir/end2end.engine
[2022-07-21 15:32:10.318] [mmdeploy] [error] [model.cpp:15] load model failed. Its file path is '/workspace/MOT/open-mmlab/mmdeploy/work-dir/end2end.engine'
[2022-07-21 15:32:10.318] [mmdeploy] [error] [model.cpp:20] failed to create model: not supported (2) @ /home/PJLAB/lvhan/Documents/projects/mmdeploy-release/mmdeploy/csrc/mmdeploy/core/model.cpp:45
failed to create detector, code: 6
I found the root cause. After tracing the MMDeploy code, I realized that I had misunderstood the model_path parameter of mmdeploy_detector_create_by_path: it should be the path of the directory that contains the converted model files, not the path of the engine file itself.
Example:
Wrong case: const std::string model_path = "/workspace/open-mmlab/mmdeploy/work-dir/end2end.engine";
Correct case: const std::string model_path = "/workspace/open-mmlab/mmdeploy/work-dir/";
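To illustrate the fix, here is a minimal sketch patterned on the SDK detection demo (this assumes the mmdeploy 0.6 C API and OpenCV; the header name, struct fields, and the image path are assumptions and may differ between SDK versions):

#include <iostream>
#include <string>

#include <opencv2/imgcodecs.hpp>

#include "detector.h"  // or "mmdeploy/detector.h", depending on the SDK version/layout

int main() {
  // Point model_path at the --work-dir directory, not at end2end.engine.
  const std::string model_path = "/workspace/open-mmlab/mmdeploy/work-dir/";
  mm_handle_t detector{};
  int status = mmdeploy_detector_create_by_path(model_path.c_str(), "cuda", 0, &detector);
  if (status != MM_SUCCESS) {
    std::cout << "failed to create detector, code: " << status << std::endl;
    return 1;
  }

  cv::Mat img = cv::imread("/path/to/test.jpg");  // placeholder image path
  mm_mat_t mat{img.data, img.rows, img.cols, 3, MM_BGR, MM_INT8};
  mm_detect_t* bboxes{};
  int* res_count{};
  status = mmdeploy_detector_apply(detector, &mat, 1, &bboxes, &res_count);
  if (status == MM_SUCCESS) {
    std::cout << "detected " << *res_count << " objects" << std::endl;
    mmdeploy_detector_release_result(bboxes, res_count, 1);
  }
  mmdeploy_detector_destroy(detector);
  return 0;
}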
When I call the function mmdeploy_detector_create_by_path and set model_path to the ONNX model path, the error "no ModelImpl can read sdk_model" occurs. Is model_path not the ONNX model path? What should model_path be?