marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License
1.45k stars · 356 forks

YoloV8 export still does not work #470

Closed mgabell closed 10 months ago

mgabell commented 12 months ago

The export seems to work. Training goes fine. Prediction with Ultralytics works fine. But DeepStream with YOLOv8 does not create boxes, so there must still be an issue with the export. I use export_yolov8.py and run:

python3 export_yolov8.py -w myYoloModel.pt --dynamic

This generates an ONNX model (very fast), but using it does not work. No errors: the video plays, but it does not detect any defects. Sometimes I do get a working model, so what does it depend on? Epochs?
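For context on what the exporter produces: as the engine info later in the log shows, the model ends up with three flat outputs (boxes 8400x4, scores 8400x1, classes 8400x1). A minimal numpy sketch of threshold filtering over tensors of those shapes — dummy data, not the actual DeepStream-Yolo parser — can help rule out a too-high confidence threshold as the reason no boxes appear:

```python
import numpy as np

def filter_detections(boxes, scores, classes, conf_thres=0.25):
    """Keep only detections above the confidence threshold.

    boxes:   (8400, 4) float32
    scores:  (8400, 1) float32
    classes: (8400, 1) float32 class indices
    (shapes taken from the engine info printed in the log)
    """
    keep = scores[:, 0] >= conf_thres
    return boxes[keep], scores[keep], classes[keep]

# Dummy tensors standing in for real engine output.
boxes = np.zeros((8400, 4), dtype=np.float32)
scores = np.zeros((8400, 1), dtype=np.float32)
classes = np.zeros((8400, 1), dtype=np.float32)
scores[0, 0] = 0.9  # pretend exactly one confident detection
b, s, c = filter_detections(boxes, scores, classes)
print(len(b))  # 1
```

If every raw score is below pre-cluster-threshold in the config, you would see exactly this symptom: the pipeline runs cleanly and draws nothing.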

WARNING: Deserialize engine failed because file path: /home/aiadmin/Development/deepstream-yolov8-evaluation/model_b1_gpu0_fp16.engine open error
0:00:03.580736845  7375     0x31891b20 WARN                 nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/home/aiadmin/Development/deepstream-yolov8-evaluation/model_b1_gpu0_fp16.engine failed
0:00:03.754231754  7375     0x31891b20 WARN                 nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/home/aiadmin/Development/deepstream-yolov8-evaluation/model_b1_gpu0_fp16.engine failed, try rebuild
0:00:03.754333067  7375     0x31891b20 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:375: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.

Building the TensorRT Engine

Building complete

0:04:08.014390407  7375     0x31891b20 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1955> [UID = 1]: serialize cuda engine to file: /home/aiadmin/Development/deepstream-yolov8-evaluation/model_b1_gpu0_fp32.engine successfully
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 4
0   INPUT  kFLOAT input           3x640x640       
1   OUTPUT kFLOAT boxes           8400x4          
2   OUTPUT kFLOAT scores          8400x1          
3   OUTPUT kFLOAT classes         8400x1          

0:04:08.261001291  7375     0x31891b20 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:config_infer_primary_yoloV8.txt sucessfully
Decodebin child added: source 

Decodebin child added: decodebin0 

**PERF:  {'stream0': 0.0} 
Decodebin child added: qtdemux0 
Decodebin child added: multiqueue0 
Decodebin child added: h264parse0 
Decodebin child added: capsfilter0 
Decodebin child added: nvv4l2decoder0 

Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0xffff6b680ca0 (GstCapsFeatures at 0x8a1cc6e0)>
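One thing worth noting in the log above: deserialization of model_b1_gpu0_fp16.engine fails, but the engine that then gets built and serialized is model_b1_gpu0_fp32.engine. nvinfer derives the auto-built engine file name from batch size, GPU id, and precision (network-mode), so if the model-engine-file in the config names a precision that does not match network-mode, every run will hit the "open error" / rebuild path. A sketch of the naming scheme as I read it from the log (not an official API):

```python
def engine_name(batch: int, gpu: int, network_mode: int) -> str:
    """Reproduce the auto-generated engine file name seen in the log.

    network_mode follows the nvinfer config convention:
    0 = FP32, 1 = INT8, 2 = FP16.
    """
    prec = {0: "fp32", 1: "int8", 2: "fp16"}[network_mode]
    return f"model_b{batch}_gpu{gpu}_{prec}.engine"

print(engine_name(1, 0, 0))  # model_b1_gpu0_fp32.engine (the file serialized in the log)
print(engine_name(1, 0, 2))  # model_b1_gpu0_fp16.engine (the file that failed to open)
```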
mgabell commented 10 months ago

It works now...

xxiMiaxx commented 10 months ago

> It works now...

Okay, but what was the problem and how did you solve it?

mgabell commented 10 months ago

Sorry, to be honest I don't know. I switched to YOLOv5 and installed the required packages. I then trained a model with Ultralytics. I also made sure to follow Marcos's instructions to the letter: I used HIS export tool, remembered to move labels.txt and the ONNX model to the correct folder for DeepStream, then deleted the engine file and made sure the engine file name in the config file was correct.

See if that works. I have worked my "#¤%"# off to get this working, so I will do what I can to assist.
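For anyone landing here later: the fix described above boils down to keeping the file names in config_infer_primary_yoloV8.txt consistent with what is actually on disk. A hedged example of the relevant keys (the paths and file names are hypothetical — adjust to your own setup):

```ini
[property]
# Hypothetical file names -- use your own exported model and labels.
onnx-file=myYoloModel.onnx
labelfile-path=labels.txt
# Must match the precision in the name: 0=FP32, 1=INT8, 2=FP16.
# With network-mode=2, nvinfer expects/creates model_b1_gpu0_fp16.engine.
network-mode=2
model-engine-file=model_b1_gpu0_fp16.engine
```

If you change network-mode (or batch size), delete the old .engine file and update model-engine-file, or nvinfer will warn about a failed deserialize and rebuild on every start.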