marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models

Error running YOLOv8 ONNX file #465

Closed mgabell closed 9 months ago

mgabell commented 9 months ago

Hi,

I have followed the instructions for creating Yolo8l.pt with Ultralytics. I tried their prediction and it seems to do a good job finding and classifying the objects. I used your guide to convert to ONNX and ran the same model on a video made from the images I used with the Ultralytics prediction. I get this error:

**PERF: {'stream0': 29.98, 'stream1': 29.98, 'stream2': 29.98}

**PERF: {'stream0': 29.79, 'stream1': 29.79, 'stream2': 29.79}

Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(3003): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstNv3dSink:nv3d-sink: There may be a timestamping problem, or this computer is too slow.
Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(3003): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstNv3dSink:nv3d-sink: There may be a timestamping problem, or this computer is too slow.
Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(3003): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstNv3dSink:nv3d-sink: There may be a timestamping problem, or this computer is too slow.

I have no output in the output folder, which I should have, since the images all contain objects to find. The images were selected by a ResNet50 model trained on the same dataset.

How can I get rid of the warning? Is the model too large?
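
For background, this warning comes from GstBaseSink synchronizing buffers to the pipeline clock: when inference runs slower than the stream's timestamps, late buffers are dropped at the sink. A minimal sketch of the usual mitigation in a DeepStream Python app (assuming a pipeline like the one above with an nv3dsink element; this is not the poster's actual code) is to disable clock sync, and optionally QoS, on the sink:

```python
# Sketch only: render late buffers instead of letting
# gst_base_sink_is_too_late() drop them at the sink.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

sink = Gst.ElementFactory.make("nv3dsink", "nv3d-sink")
sink.set_property("sync", False)  # GstBaseSink "sync": don't wait on buffer timestamps
sink.set_property("qos", False)   # GstBaseSink "qos": don't drop via QoS lateness events
```

Note that this only hides the symptom; if inference cannot keep up with three 30-fps streams, the underlying fix is a lighter model, FP16/INT8 precision, or fewer streams.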

mgabell commented 9 months ago

More information on the issue with the non-working inference:

Now playing...
0 :  file:///home/aiadmin/Development/deepstream-yolov8-evaluation/test_set.mp4
Starting pipeline 

0:00:03.199459543 17160      0x95df320 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/home/aiadmin/Development/deepstream-yolov8-evaluation/model_b1_gpu0_fp16.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 4
0   INPUT  kFLOAT input           3x640x640       
1   OUTPUT kFLOAT boxes           8400x4          
2   OUTPUT kFLOAT scores          8400x1          
3   OUTPUT kFLOAT classes         8400x1          

0:00:03.366206603 17160      0x95df320 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /home/aiadmin/Development/deepstream-yolov8-evaluation/model_b1_gpu0_fp16.engine
0:00:03.394881851 17160      0x95df320 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:config_infer_primary_yoloV8.txt sucessfully
Decodebin child added: source 

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
In cb_newpad

gstname= video/x-raw features= <Gst.CapsFeatures object at 0xffff8f2bdb20 (GstCapsFeatures at 0x26e11ee0)>

**PERF: {'stream0': 33.69}
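
One hypothetical way to narrow down the missing detections (not something run in this thread) is to feed the exported ONNX model directly to onnxruntime and confirm the heads fire outside DeepStream; the engine dump above fixes the input/output names and shapes:

```python
# Hypothetical sanity check: count raw candidates above a confidence
# threshold. Names/shapes follow the engine dump above: input "input"
# 3x640x640; outputs "boxes" 8400x4, "scores" 8400x1, "classes" 8400x1
# (with a leading batch dimension). File names are placeholders.
import cv2
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

img = cv2.imread("sample.jpg")                  # any frame from the test video
x = cv2.resize(img, (640, 640))[:, :, ::-1]     # BGR -> RGB (letterboxing omitted)
x = x.transpose(2, 0, 1)[None].astype(np.float32) / 255.0

boxes, scores, classes = sess.run(None, {"input": x})
keep = scores[0, :, 0] > 0.25
print(f"{int(keep.sum())} raw detections above 0.25 confidence")
```

If this prints zero on images that Ultralytics' own predictor handles, the export step is the problem rather than the DeepStream config.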

mgabell commented 9 months ago

This is solved. The problem was the versions of PyTorch, torchvision, and protobuf. Upgrading to the latest PyTorch and torchvision works well. However, there is a problem with the ONNX export: running the export .py requires one version of protobuf, whereas training the model with Ultralytics requires another. So I bounce between protobuf 3.19.1 and the latest version.
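
A small guard (an assumption on my part, not from the thread) can make the protobuf bouncing less error-prone by failing fast when the wrong version is active; keeping training and export in separate virtualenvs avoids it entirely:

```python
# Hypothetical guard: abort early if the active protobuf does not match
# what this step expects, instead of failing deep inside export/training.
import google.protobuf

EXPECTED = "3.19.1"  # version the poster reports the ONNX export needs

if google.protobuf.__version__ != EXPECTED:
    raise RuntimeError(
        f"protobuf {google.protobuf.__version__} is active but this step "
        f"was verified with {EXPECTED}; switch environments first."
    )
```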