triple-Mu / YOLOv8-TensorRT

YOLOv8 accelerated with TensorRT!
MIT License

why not use --device cuda when run ‘python3 export-det.py’ export onnx in jetson.md ? #232

Open WeisonWEileen opened 1 week ago

WeisonWEileen commented 1 week ago

Why not use `--device cuda` when running `python3 export-det.py` to export the ONNX, as described in the jetson.md file?

WeisonWEileen commented 1 week ago

And could you please explain why we export the ONNX on a PC but build the TensorRT engine on the Jetson?

triple-Mu commented 1 day ago

ONNX is a cross-platform model definition, so we can export it on a PC and use it on the Jetson. But TensorRT is not cross-platform: an engine is built for a specific GPU architecture and TensorRT version, so it must be built on the device where it will run.
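The split workflow described above might look like the following sketch. The `export-det.py` flags are taken from this repo's usual usage, but the exact weight/engine file names and the `trtexec` path are illustrative assumptions; adjust them to your setup.

```shell
# On the PC: export the PyTorch checkpoint to ONNX.
# --device cpu is fine here because export only traces the model;
# the resulting .onnx file is hardware-independent.
python3 export-det.py \
    --weights yolov8s.pt \
    --sim \
    --device cpu

# Copy yolov8s.onnx to the Jetson, then build the engine there,
# because a TensorRT engine is tied to the GPU architecture and
# TensorRT version of the machine that builds it.
/usr/src/tensorrt/bin/trtexec \
    --onnx=yolov8s.onnx \
    --saveEngine=yolov8s.engine \
    --fp16
```

An engine built this way on the Jetson will not load on the PC (and vice versa), which is exactly why only the ONNX file crosses machines.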