jhunmk29 opened 11 months ago
You can skip the onnxsim. It's only used to simplify the model (via the --simplify flag) during the export process.
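For reference, a sketch of the export invocation with and without simplification. The script name and flags are assumed from the DeepStream-Yolo repo's utils directory; adjust paths to your setup:

```shell
# Hypothetical invocation of the DeepStream-Yolo YOLOv5 exporter.
# Copy export_yoloV5.py from DeepStream-Yolo/utils into your yolov5 clone first.
python3 export_yoloV5.py -w yolov5s.pt --simplify   # simplified ONNX (needs onnxsim)
python3 export_yoloV5.py -w yolov5s.pt              # plain export, no onnxsim required
```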
@marcoslucianops
Thanks, but now I have another problem: I exported the ONNX model using YOLOv5 version 7, but when I run the command deepstream-app -c deepstream_app_config.txt, the app fails:
nvidia@nvidia-desktop:~/yolov5-tensorrt/DeepStream-Yolo$ deepstream-app -c deepstream_app_config.txt
Using winsys: x11
ERROR: Deserialize engine failed because file path: /home/nvidia/yolov5-tensorrt/DeepStream-Yolo/model_b1_gpu0_fp32.engine open error
0:00:01.921716540 16460 0x39e64440 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/home/nvidia/yolov5-tensorrt/DeepStream-Yolo/model_b1_gpu0_fp32.engine failed
0:00:01.921903387 16460 0x39e64440 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/home/nvidia/yolov5-tensorrt/DeepStream-Yolo/model_b1_gpu0_fp32.engine failed, try rebuild
0:00:01.921946779 16460 0x39e64440 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ModelImporter.cpp:720: While parsing node number 141 [Resize -> "onnx::Concat_271"]:
ModelImporter.cpp:721: --- Begin node ---
ModelImporter.cpp:722: input: "onnx::Resize_266"
input: "onnx::Resize_270"
input: "onnx::Resize_445"
output: "onnx::Concat_271"
name: "Resize_141"
op_type: "Resize"
attribute {
name: "coordinate_transformation_mode"
s: "asymmetric"
type: STRING
}
attribute {
name: "cubic_coeff_a"
f: -0.75
type: FLOAT
}
attribute {
name: "mode"
s: "nearest"
type: STRING
}
attribute {
name: "nearest_mode"
s: "floor"
type: STRING
}
ModelImporter.cpp:723: --- End node ---
ModelImporter.cpp:726: ERROR: builtin_op_importers.cpp:3422 In function importResize:
[8] Assertion failed: scales.is_weights() && "Resize scales must be an initializer!"
Could not parse the ONNX model
Failed to build CUDA engine
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:02.140975759 16460 0x39e64440 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:02.141073903 16460 0x39e64440 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:02.141121454 16460 0x39e64440 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:02.141196942 16460 0x39e64440 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:02.141227918 16460 0x39e64440 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Config file path: /home/nvidia/yolov5-tensorrt/DeepStream-Yolo/config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: <main:707>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /home/nvidia/yolov5-tensorrt/DeepStream-Yolo/config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed
How can I solve this problem?
Add --opset 12 to the export command and try again.
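Assuming the exporter script from the DeepStream-Yolo utils (names and flags are my assumption; check your script's --help), the full re-export would look roughly like:

```shell
# Sketch: re-export with a lower opset. With older opsets the Resize
# scales tend to be folded into initializers, which is what the
# TensorRT parser asserts on ("Resize scales must be an initializer!").
python3 export_yoloV5.py -w yolov5s.pt --opset 12 --simplify
```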
@marcoslucianops
It's not working; I get the same error. I will show you my version details.
My YOLOv5 version is 7.
Are you using a custom model?
No, I converted the stock yolov5s.pt to yolov5s.onnx.
@marcoslucianops
I updated to JetPack 5.0.1 DP, and the engine now builds. But when I run DeepStream, it doesn't show the detection results. Why?
This is my configuration file:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=yolov5s.onnx
model-engine-file=model_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=0
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
#force-implicit-batch-dim=1
#workspace-size=1000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
And in the DeepStream folder I have the label file, labels.txt. The yolov5s model is not custom.
@jhunmk29 Hi, have you solved the problem? When I run DeepStream, it also doesn't show the detection results.
@PigletPh No, but if I solve it, I will tell you.
What is your PyTorch version?
@marcoslucianops
PyTorch 2.0, torchvision 0.16.
@marcoslucianops I have used YOLOv5's detect.py to detect targets in a sample video (python3 detect.py --weights yolov5s.pt --source sample_1080p_h265.mp4 --device 0), and it detects them, so I think the PyTorch environment is OK.
Can you try with PyTorch < 2.0?
It could be a problem in the new PyTorch when converting the layers to ONNX. That's why I asked you to try a previous version.
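Something along these lines should get a pre-2.0 PyTorch for the export (the version pins are illustrative; on Jetson you would normally install NVIDIA's prebuilt wheels rather than the PyPI ones):

```shell
# Sketch: install a pre-2.0 PyTorch with the matching torchvision
# (torch 1.13.x pairs with torchvision 0.14.x) for the ONNX export.
pip3 install "torch<2.0" "torchvision<0.15"
```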
@jhunmk29 I think I have the same problem as you. Have you solved it? I also can't see the bounding boxes on screen.
@jhunmk29 PyTorch 1.10.0 and torchvision 0.11.1 on my Jetson; I think that is relatively old.
I had the same problem as you, and I have solved it.
Error: Failed to build onnxsim, or no module named 'onnxsim'.
Method: directly pip3 install onnxsim.
Note that you need to pay attention to your cmake version: the old version (1.18.3) does not work. Upgrading cmake to 3.27.5 fixed it for me: pip3 install cmake==3.27.5.
You can skip the onnxsim installation. It's only used to simplify the model via the --simplify arg in the export script.
Hi, I read the 'YOLOv5 usage' tutorial. I ran the command pip3 install onnxsim, but it fails. How can I solve this problem?