marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License

Accuracy Issue on converting model to ONNX #497

Open agarwalkunal12 opened 9 months ago

agarwalkunal12 commented 9 months ago

There is a significant accuracy drop between running the PyTorch model directly and running the model converted to ONNX with DeepStream 6.2.

I had the same issue with YOLOv5 and YOLOv8. I also saw this warning when generating the engine file:

WARNING: [TRT]: onnx2trt_utils.cpp:377: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
WARNING: [TRT]: onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
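
For anyone else debugging this, here is a minimal sketch of how the export step can be checked in isolation, before TensorRT is involved: run the original PyTorch model and the exported ONNX (through ONNX Runtime) on the same dummy input and compare the raw outputs. `model.onnx` and `load_my_pytorch_model()` are placeholders for your own setup, and the shapes only line up if the export did not append an extra decode/NMS layer.

```python
import numpy as np
import onnxruntime as ort
import torch

# Placeholder path/loader: point these at your exported ONNX file and
# whatever you normally use to load the original PyTorch weights.
ONNX_PATH = "model.onnx"
pt_model = load_my_pytorch_model()  # hypothetical helper, replace with your loader
pt_model.eval()

# One dummy input, fed to both runtimes (640x640 is the usual YOLO input size).
x = np.random.rand(1, 3, 640, 640).astype(np.float32)

with torch.no_grad():
    pt_out = pt_model(torch.from_numpy(x))
if isinstance(pt_out, (tuple, list)):
    pt_out = pt_out[0]

sess = ort.InferenceSession(ONNX_PATH, providers=["CPUExecutionProvider"])
onnx_out = sess.run(None, {sess.get_inputs()[0].name: x})[0]

# If these agree to roughly 1e-4, the ONNX export itself is fine and the drop
# is introduced later, at engine-build time or by the DeepStream preprocessing
# / config_infer settings.
print("pt shape:", tuple(pt_out.shape), "onnx shape:", onnx_out.shape)
print("max abs diff:", float(np.abs(pt_out.numpy() - onnx_out).max()))
```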

agarwalkunal12 commented 9 months ago

@marcoslucianops Please have a look if possible. Could this warning be the cause of the accuracy drop reported by others? Previously, using DS 6.0, I hadn't experienced a drop in accuracy like this.

statscol commented 9 months ago

Any workaround? I'm having the same issue converting the weights from a fine-tuned YOLOv8 in DS 6.1.

agarwalkunal12 commented 9 months ago

Not yet.

HGD-ai commented 8 months ago

I ran into the same problem with YOLOv5 in DS 6.1. @marcoslucianops

dioptrique commented 6 months ago

Same issue here, has anyone been able to resolve this? @marcoslucianops Many of the detections are missing when running deepstream-app.

dioptrique commented 6 months ago

Here is the exact same issue posted by another user: https://github.com/marcoslucianops/DeepStream-Yolo/issues/520

abdulazizm commented 5 months ago

@dioptrique @HGD-ai @agarwalkunal12 Have you made any progress on this accuracy issue?