triple-Mu / YOLOv8-TensorRT

YOLOv8 accelerated with TensorRT!
MIT License

FP16 engine does not detect object #194

Open MoussaGRICHE opened 9 months ago

MoussaGRICHE commented 9 months ago

Hello,

I have a yolov8 model that I converted to engine.

With the FP32 engine, inference works and objects are detected correctly: `/usr/src/tensorrt/bin/trtexec --onnx=yolov8s.onnx --saveEngine=yolov8s.engine`

But with the FP16 engine, inference runs without errors, yet no objects are detected: `/usr/src/tensorrt/bin/trtexec --onnx=yolov8s.onnx --saveEngine=yolov8s.engine --fp16`

I am using a C++ program on a Jetson TX2 NX.

I converted the same ONNX model to FP32 and FP16 engines on a PC with CUDA, and both engines work and detect very well there.

Do you have any idea why I get this problem?

Thank you.
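For anyone debugging this: a frequent cause of "FP16 engine runs but detects nothing" is an intermediate activation overflowing FP16's range (max ≈ 65504), which turns downstream values into Inf/NaN and collapses every confidence score. This is only one plausible explanation, not a confirmed diagnosis of this issue; the sketch below uses NumPy as a stand-in for what can happen inside an FP16 engine:

```python
import numpy as np

FP16_MAX = np.finfo(np.float16).max  # 65504.0

# An activation value that is fine in FP32...
act32 = np.float32(70000.0)
# ...overflows to Inf when the same tensor is stored in FP16:
act16 = np.float16(70000.0)
print(act32, act16)  # 70000.0 inf

# Inf propagates through the head: a sigmoid still evaluates,
# but every confidence downstream collapses, so NMS keeps nothing.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.float64(x)))

score = sigmoid(-act16)  # sigmoid(-inf) -> 0.0
print(score)             # 0.0
```

If this is the cause, keeping the offending layers in FP32 (mixed precision) rather than building the whole network in FP16 is the usual workaround.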

triple-Mu commented 8 months ago

What is your TensorRT version on the Jetson? Could you please upgrade it to 8.5.1, which ships with JetPack 5.0?

MoussaGRICHE commented 8 months ago

The TensorRT version is 8.2.1.

I can't upgrade to JetPack 5 because I am using a Jetson TX2 NX.

Could I upgrade TensorRT without upgrading JetPack?

triple-Mu commented 6 months ago

> The TensorRT version is 8.2.1.
>
> I can't upgrade to JetPack 5 because I am using a Jetson TX2 NX.
>
> Could I upgrade TensorRT without upgrading JetPack?

Do you have any further questions? Sorry for the late reply.

OPlincn commented 4 months ago

Well, I am hitting the same problem again; maybe it is because my JetPack version is 4.6?

triple-Mu commented 4 months ago

> Well, I am hitting the same problem again; maybe it is because my JetPack version is 4.6?

I suggest using the newest JetPack.

duong0411 commented 2 months ago

> With the FP32 engine, inference works and objects are detected correctly. But with the FP16 engine, inference runs without errors, yet no objects are detected.

Why do you use FP16 on the Jetson Nano when the accuracy is that bad?

OPlincn commented 2 months ago

> With the FP32 engine, inference works and objects are detected correctly. But with the FP16 engine, inference runs without errors, yet no objects are detected.

> Why do you use FP16 on the Jetson Nano when the accuracy is that bad?

I want to run model inference in FP16 precision to get faster inference speed.

triple-Mu commented 2 months ago

> With the FP32 engine, inference works and objects are detected correctly. But with the FP16 engine, inference runs without errors, yet no objects are detected.
>
> Why do you use FP16 on the Jetson Nano when the accuracy is that bad?

This is due to a problem with the older TensorRT version.

duong0411 commented 2 months ago

What? I think the problem is in build.py. I tried both methods, trtexec and build.py, but the accuracy is bad either way.

duong0411 commented 2 months ago

When I convert the YOLOv8 ONNX model without end2end to FP16, my accuracy is good, but when I convert the end2end YOLOv8 model, the accuracy is bad.
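One plausible reason the end2end export is more fragile (an illustration with NumPy as a stand-in, not the confirmed TensorRT behavior): the end2end graph embeds the box decode step, so that arithmetic also runs in FP16. FP16 has only a 10-bit mantissa, so once decoded coordinates or accumulated values grow past 2048, the spacing between representable numbers is 2 and small increments are silently lost:

```python
import numpy as np

# FP16 spacing (ulp) grows with magnitude: in [2048, 4096) the gap
# between representable values is 2, so adding 1 changes nothing.
x = np.float16(2048.0)
assert x + np.float16(1.0) == x  # 2049 rounds back to 2048

# A coordinate accumulated in FP16 can therefore stop moving entirely,
# while the same computation in FP32 advances as expected:
coord32 = np.float32(2048.0)
coord16 = np.float16(2048.0)
for _ in range(10):
    coord32 += np.float32(0.9)
    coord16 = np.float16(coord16 + np.float16(0.9))
print(float(coord32), float(coord16))  # FP32 grows; FP16 stays at 2048.0
```

If the decode math in the non-end2end path runs on the host in FP32, that would explain why only the end2end FP16 engine loses accuracy; forcing the decode/NMS layers to FP32 in a mixed-precision build is a common workaround.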