Open MoussaGRICHE opened 9 months ago
What's your TensorRT version on the Jetson? Could you please upgrade it to 8.5.1 in JetPack 5.0?
The TensorRT version is 8.2.1.
I can't upgrade to JetPack 5 because I am using a Jetson TX2 NX.
Could I upgrade TensorRT without upgrading JetPack?
Do you have further questions? Sorry for replying to you so late.
Well, I am running into the same problem again; maybe because my JetPack version is 4.6?
I suggest using the newest JetPack.
Hello,
I have a yolov8 model that I converted to engine.
With the FP32 engine, inference works and objects are detected correctly: /usr/src/tensorrt/bin/trtexec --onnx=yolov8s.onnx --saveEngine=yolov8s.engine
But with the FP16 engine, inference runs without errors yet no objects are detected: /usr/src/tensorrt/bin/trtexec --onnx=yolov8s.onnx --saveEngine=yolov8s.engine --fp16
I am using a C++ program on a Jetson TX2 NX.
I converted the same ONNX model to FP32 and FP16 engines on a PC with CUDA, and both engines work and detect very well.
Do you have an idea why I get this problem?
Thank you.
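A likely cause (not confirmed for this specific model, just the usual FP16 failure mode): half precision tops out at a maximum finite value of 65504, so an intermediate tensor that is unremarkable in FP32 can overflow to inf in FP16 and poison every downstream box score, leaving zero detections. A quick numpy illustration:

```python
import numpy as np

# FP16's largest finite value is 65504.
print(float(np.finfo(np.float16).max))  # 65504.0

# A value that is ordinary in FP32 overflows to inf when cast to FP16.
x32 = np.float32(70000.0)
x16 = np.float16(x32)
print(np.isinf(x16))  # True
```

This is why an engine that is numerically fine on a desktop GPU can still lose all detections on another build: whether a given layer overflows depends on which layers the builder actually fuses and runs in FP16.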
Why do you use FP16 on the Jetson Nano? For me the accuracy goes negative.
I hope to perform model inference with FP16 precision to achieve faster inference speed.
This is due to a problem with the older version of TensorRT.
What? I think the problem is in build.py. I tried both methods, trtexec and build.py, but the accuracy is still negative.
When I convert the YOLOv8 ONNX model without the end2end head using FP16, the accuracy is good; but when I convert the end2end YOLOv8 model, the accuracy is negative.
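If the accuracy only collapses with the end2end export, the in-graph decode/NMS tail is a common suspect, since its arithmetic can leave FP16 range even when the backbone is fine. One workaround is to build with FP16 enabled but pin those layers to FP32. A minimal sketch against TensorRT's Python network interface (the name patterns, and matching layers by name at all, are assumptions; inspect your own network and adapt this to your build.py):

```python
# Hedged sketch: pin decode/NMS layers to FP32 while the rest builds in FP16.
# The name patterns below are assumptions about how the export names its
# layers; adjust them after inspecting your own network.
DECODE_PATTERNS = ("sigmoid", "concat", "nonmaxsuppression", "decode", "nms")

def pin_decode_layers_to_fp32(network, fp32_type):
    """Pin layers whose names match DECODE_PATTERNS to FP32.

    `network` is expected to look like TensorRT's INetworkDefinition
    (num_layers, get_layer(i), layer.name, layer.precision); `fp32_type`
    would be trt.DataType.FLOAT in a real build. Returns the pinned names.
    """
    pinned = []
    for i in range(network.num_layers):
        layer = network.get_layer(i)
        if any(pat in layer.name.lower() for pat in DECODE_PATTERNS):
            layer.precision = fp32_type  # force this layer to run in FP32
            pinned.append(layer.name)
    return pinned
```

In a real build you would call this on the parsed network before building the engine, and also set the builder flag that makes TensorRT obey per-layer precision constraints (the flag's exact name varies across TensorRT versions, so check the documentation for yours).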