marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License

"WARNING: Number of classes mismatch, make sure to set num-detected-classes=8396 in config_infer file" on Yolov8s_onnx on Jetson Orin AGX 64 #491

Open salvadorscardua opened 7 months ago

salvadorscardua commented 7 months ago

I am testing config_infer_primary_yoloV8_onnx.txt with yolov8s.pt (converted to ONNX), and I always receive this warning; the bounding boxes are not drawn.

I am running the command: deepstream-app -c deepstream_app_config.txt

These are my config files:


config_infer_primary_yoloV8_onnx.txt

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=yolov8s.onnx
model-engine-file=yolov8s.onnx_b1_gpu0_fp16.engine
int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=2
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-func-name=NvDsInferParse_YOLOV8_ONNX
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
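As a side note (not part of the original post): num-detected-classes is expected to match the number of classes the model was trained on, which normally equals the number of lines in the file referenced by labelfile-path. A minimal sanity check, assuming labels.txt sits next to the config:

    # Count the class names in labels.txt (assumption: one class name per line,
    # and this is the same file referenced by labelfile-path above).
    with open("labels.txt") as f:
        labels = [line.strip() for line in f if line.strip()]

    # A standard COCO-trained yolov8s.pt should give 80 here, matching
    # num-detected-classes=80 in config_infer_primary_yoloV8_onnx.txt.
    print(f"labels.txt defines {len(labels)} classes")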


deepstream_app_config.txt

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
type=3
uri=file:///opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream-test1/1.h264
num-sources=1
gpu-id=0
cudadec-memtype=0

[sink0]
enable=1
type=2
sync=0
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=5
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV8_onnx.txt

[tests]
file-loop=0


This is the initial trace:

0:00:02.381380161 38868 0xaaaad700a6d0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() [UID = 1]: deserialized trt engine from :/home/memoriacam/DeepStream-Yolo/yolov8s.onnx_b1_gpu0_fp16.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x640x640
1 OUTPUT kFLOAT boxes 8400x4
2 OUTPUT kFLOAT scores 8400x1
3 OUTPUT kFLOAT classes 8400x1

0:00:02.566483541 38868 0xaaaad700a6d0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() [UID = 1]: Use deserialized engine model: /home/memoriacam/DeepStream-Yolo/yolov8s.onnx_b1_gpu0_fp16.engine
0:00:02.596228992 38868 0xaaaad700a6d0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:/home/memoriacam/DeepStream-Yolo/config_infer_primary_yoloV8_onnx.txt sucessfully

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source. To go back to the tiled display, right-click anywhere on the window.

** INFO: : Pipeline ready

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: : Pipeline running

mr-mainak commented 7 months ago

(quotes salvadorscardua's original post, config files, and trace above)

You have set num-detected-classes=80. Set this to the number of classes your YOLO model was trained with.

salvadorscardua commented 7 months ago

I already set it that way, as you can see in my config_infer_primary_yoloV8_onnx.txt.

I think I may have an issue with converting the model to ONNX.
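One quick way to check the conversion (my suggestion, not from the thread) is to inspect the exported model's output tensors; a minimal sketch, assuming the onnx Python package is installed and yolov8s.onnx is in the current directory:

    import onnx

    # Print the output tensors of the exported model. The engine in the trace
    # above reports three outputs (boxes 8400x4, scores 8400x1, classes 8400x1);
    # whatever prints here should line up with the layout the parser built into
    # libnvdsinfer_custom_impl_Yolo.so expects.
    model = onnx.load("yolov8s.onnx")
    for out in model.graph.output:
        dims = [d.dim_value if d.dim_value else d.dim_param
                for d in out.type.tensor_type.shape.dim]
        print(out.name, dims)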

marcoslucianops commented 7 months ago

Please update to the new files in this repo; you are using the old files. Then regenerate the ONNX model and build the engine again.
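One extra gotcha worth noting (general nvinfer behaviour, not stated above): the trace shows the pipeline deserializing the existing yolov8s.onnx_b1_gpu0_fp16.engine, so after re-exporting the ONNX with the repo's current export script the stale engine file should be removed (or model-engine-file changed) so the engine is rebuilt from the new ONNX. A minimal sketch, assuming the paths match the onnx-file / model-engine-file entries above:

    from pathlib import Path

    # Remove the previously serialized engine so nvinfer rebuilds it from the
    # regenerated ONNX on the next run of deepstream-app.
    engine = Path("yolov8s.onnx_b1_gpu0_fp16.engine")
    if engine.exists():
        engine.unlink()
        print(f"removed stale engine: {engine}")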