Open pullmyleg opened 6 months ago
Hi, thanks for all the work on the repo @marcoslucianops. I noticed an issue when exporting an RT-DETR PyTorch model for earlier versions of DeepStream (6.0.1) with the --simplify flag.
Specifically, the error [ShapeInferenceError] Inferred shape and existing shape differ in rank: (3) vs (0) indicates a mismatch in the expected dimensions of the tensors involved in a multiplication ('Mul') operation.
It looks like the same issue as onnx/onnx#3565, and the export still successfully produces a .onnx file.
Questions:
- Can this model support the --simplify flag for older versions of DeepStream such as 6.0.1?
- Are you able to suggest a fix @marcoslucianops?
Error:
```
$ python3 export_rtdetr_pytorch.py -w output/rtdetr_r18vd_6x_coco/checkpoint0013.pth -c configs/rtdetr/rtdetr_r18vd_6x_coco.yml --simplify -s 640
Starting: output/rtdetr_r18vd_6x_coco/checkpoint0013.pth
Opening RT-DETR PyTorch model
Load PResNet18 state_dict
Exporting the model to ONNX
============= Diagnostic Run torch.onnx.export version 2.0.1+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
Simplifying the ONNX model
Traceback (most recent call last):
  File "export_rtdetr_pytorch.py", line 110, in <module>
    sys.exit(main(args))
  File "export_rtdetr_pytorch.py", line 83, in main
    model_onnx, _ = onnxsim.simplify(model_onnx)
  File "/home/inviol/.virtualenvs/RTDETR_pytorch/lib/python3.8/site-packages/onnxsim/onnx_simplifier.py", line 199, in simplify
    model_opt_bytes = C.simplify(
onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] (op_type:Mul, node name: /1/Mul): [ShapeInferenceError] Inferred shape and existing shape differ in rank: (3) vs (0)
```
The export works with no issue when using the --dynamic flag for later versions:
```
$ python3 export_rtdetr_pytorch.py -w output/rtdetr_r18vd_6x_coco/checkpoint0013.pth -c configs/rtdetr/rtdetr_r18vd_6x_coco.yml --dynamic
Starting: output/rtdetr_r18vd_6x_coco/checkpoint0013.pth
Opening RT-DETR PyTorch model
Load PResNet18 state_dict
Exporting the model to ONNX
============= Diagnostic Run torch.onnx.export version 2.0.1+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
Done: checkpoint0013.onnx
```
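For anyone debugging this, here is a minimal sketch (editorial, not from the thread) that reproduces the failing shape-inference step outside of onnxsim and inspects the `/1/Mul` node named in the error; the `checkpoint0013.onnx` path is an assumption and presumes you first exported without --simplify:

```python
import onnx
from onnx import shape_inference

# Load the un-simplified export (assumed path; export without --simplify first).
model = onnx.load("checkpoint0013.onnx")

try:
    # strict_mode makes inconsistent nodes raise instead of being skipped,
    # mirroring the check that onnxsim trips over.
    shape_inference.infer_shapes(model, strict_mode=True)
except Exception as e:
    print("Shape inference failed:", e)

# Inspect the Mul node named in the error message.
for node in model.graph.node:
    if node.op_type == "Mul" and node.name == "/1/Mul":
        print("inputs:", list(node.input), "outputs:", list(node.output))
```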
Hello, I have the same question as you. Have you solved this problem?
Hi @IronmanVsThanos, after export the model still converts and runs on DeepStream x86 successfully, but it will not run on a Jetson unit.
I will post more detail on this later, but the model exported with the error still works.
Thank you for your reply. When I use the command `python3 export_rtdetr_pytorch.py -w output/rtdetr_r18vd_6x_coco/checkpoint0013.pth -c configs/rtdetr/rtdetr_r18vd_6x_coco.yml --dynamic --simplify`, I can create the ONNX model successfully, but it still does not work on DeepStream 6.0.1, and the following error occurs.
```
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
ERROR: [TRT]: ModelImporter.cpp:773: While parsing node number 444 [GridSample -> "/0/decoder/decoder/layers.0/cross_attn/GridSample_output_0"]:
ERROR: [TRT]: ModelImporter.cpp:774: --- Begin node ---
ERROR: [TRT]: ModelImporter.cpp:775: input: "/0/decoder/decoder/layers.0/cross_attn/Reshape_5_output_0"
input: "/0/decoder/decoder/layers.0/cross_attn/Reshape_6_output_0"
output: "/0/decoder/decoder/layers.0/cross_attn/GridSample_output_0"
name: "/0/decoder/decoder/layers.0/cross_attn/GridSample"
op_type: "GridSample"
attribute { name: "align_corners" i: 0 type: INT }
attribute { name: "mode" s: "bilinear" type: STRING }
attribute { name: "padding_mode" s: "zeros" type: STRING }
ERROR: [TRT]: ModelImporter.cpp:776: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:4870 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
ERROR: Failed to parse onnx file
ERROR: failed to build network since parsing model errors.
```
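As a side note, the same parse failure can be reproduced outside DeepStream with the TensorRT Python API, which confirms it is the installed TensorRT (not DeepStream itself) that lacks a native GridSample importer; a rough sketch, assuming the ONNX file from the export above:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# ONNX parsing requires an explicit-batch network definition.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("checkpoint0013.onnx", "rb") as f:  # assumed path
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))  # should name the GridSample node
```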
Your export command looks incorrect: combining --dynamic (for later versions of DeepStream) with --simplify (for older versions) is not right, I don't think. Try:
```
python3 export_rtdetr_pytorch.py -w output/rtdetr_r18vd_6x_coco/checkpoint0013.pth -c configs/rtdetr/rtdetr_r18vd_6x_coco.yml --simplify -s 640
```
Thank you for your reply! As you suggested, I used the command `python3 export_rtdetr_pytorch.py -w output/rtdetr_r18vd_6x_coco/checkpoint0013.pth -c configs/rtdetr/rtdetr_r18vd_6x_coco.yml --simplify -s 640` to export ONNX on the server that trains RT-DETR, but I get the following error:
```
Opening RT-DETR PyTorch model
Load PResNet18 state_dict
Exporting the model to ONNX
============= Diagnostic Run torch.onnx.export version 2.0.1+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
Simplifying the ONNX model
Traceback (most recent call last):
  File "export_rtdetr_pytorch.py", line 110, in <module>
```
Yes, I get the same error, but the model still converts and runs in DeepStream (x86 only). It does not work on a Jetson unit.
Do you have the same environment for x86 and deepstream?
When you use the ONNX model on a Jetson unit, is the error like the following?
```
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
ERROR: [TRT]: ModelImporter.cpp:773: While parsing node number 444 [GridSample -> "/0/decoder/decoder/layers.0/cross_attn/GridSample_output_0"]:
ERROR: [TRT]: ModelImporter.cpp:774: --- Begin node ---
ERROR: [TRT]: ModelImporter.cpp:775: input: "/0/decoder/decoder/layers.0/cross_attn/Reshape_5_output_0"
input: "/0/decoder/decoder/layers.0/cross_attn/Reshape_6_output_0"
output: "/0/decoder/decoder/layers.0/cross_attn/GridSample_output_0"
name: "/0/decoder/decoder/layers.0/cross_attn/GridSample"
op_type: "GridSample"
attribute { name: "align_corners" i: 0 type: INT }
attribute { name: "mode" s: "bilinear" type: STRING }
attribute { name: "padding_mode" s: "zeros" type: STRING }
ERROR: [TRT]: ModelImporter.cpp:776: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:4870 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
ERROR: Failed to parse onnx file
ERROR: failed to build network since parsing model errors.
```
By the way, have you tried the Ultralytics RT-DETR?
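Before chasing plugins, a quick sketch (editorial, not from the thread) to confirm the exported graph really does contain the GridSample op that TensorRT is rejecting:

```python
import onnx
from collections import Counter

model = onnx.load("checkpoint0013.onnx")  # assumed export path
ops = Counter(node.op_type for node in model.graph.node)
print(ops.get("GridSample", 0), "GridSample node(s) in the graph")
```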
Smaller options are available: https://github.com/orgs/ultralytics/discussions/2545
1. I have not tried Ultralytics, but they only support the L & XL models. Have you tried Ultralytics?

I tried the L model with ONNX opset version 12, but I get the following error:
```
(yolov8) hx@FitServer-R4200-V5:/mnt/sda1/Deep_learning/code/yolov8/ultralytics$ python3 export_rtdetr_ultralytics.py -w rtdetr-l.pt --simplify
Starting: rtdetr-l.pt
Opening RT-DETR Ultralytics model
Ultralytics YOLOv8.0.155 Python-3.8.18 torch-1.9.1+cu111 CPU (Intel Xeon Gold 6148 2.40GHz)
rt-detr-l summary: 494 layers, 32148140 parameters, 0 gradients
Creating labels.txt file
Exporting the model to ONNX
Traceback (most recent call last):
  File "export_rtdetr_ultralytics.py", line 124, in <module>
```
The author of RT-DETR told me that we need to upgrade our TensorRT version to >= 8.5.1 to support some operators in RT-DETR.
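A quick way to check that requirement on the target device (a minimal sketch; GridSample is only imported natively by TensorRT >= 8.5.1):

```python
import tensorrt as trt

# Version strings look like "8.5.1.7"; compare the first three fields.
ver = tuple(int(x) for x in trt.__version__.split(".")[:3])
if ver < (8, 5, 1):
    print(f"TensorRT {trt.__version__} is too old for GridSample; upgrade to >= 8.5.1")
```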
Thanks for sharing @IronmanVsThanos!
I think we will need to use a later version of DeepStream. x86 uses a later version of CUDA (11.4), which is why it is working, and Jetson with CUDA 10.2 won't support this.
Yes, I agree with you; 6.0.1 is too old. By the way, the author of RT-DETR told me that they will release RT-DETRv2, and this version does not rely as much on the TensorRT version.
That's exciting, thanks! @IronmanVsThanos did he mention an estimated release date?
He did not mention a specific time, but said that it would be in the near future.