triple-Mu / YOLOv8-TensorRT

YOLOv8 using TensorRT accelerate !

python engine is faster #199

Closed. JuZi233 closed this issue 4 months ago.

JuZi233 commented 4 months ago

C++ infer: 7ms
python infer: 3ms

When I successfully generated the DLL file for YOLOv8 and made it callable from Python, I found that C++ inference plus letterBox took about 7 ms, while Python only needed 3 ms. I managed to waste a whole day on this. [screenshot attached]
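For a fair comparison, both backends need a warm-up pass and the same input resolution; as the next comment shows, the gap here came from the two sides running different engine sizes. A minimal Python timing sketch under those assumptions, where `infer_fn`, `my_infer`, and the 256x256 dummy input are placeholders for whichever backend (the DLL export or the Python engine) is being measured:

```python
import time
import numpy as np

def benchmark(infer_fn, img, warmup=20, iters=200):
    # Warm up so CUDA context creation and lazy initialization don't skew the numbers
    for _ in range(warmup):
        infer_fn(img)
    start = time.perf_counter()
    for _ in range(iters):
        infer_fn(img)
    # Average milliseconds per call
    return (time.perf_counter() - start) / iters * 1000.0

# Hypothetical usage: feed the SAME resolution to both the C++ DLL and the Python engine
dummy = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)
# print(f"avg latency: {benchmark(my_infer, dummy):.2f} ms")  # my_infer is a placeholder
```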

JuZi233 commented 4 months ago

I'm sorry, I forgot to use the 256x256 engine on the C++ side. With matching input sizes: C++ 256x256: 2 ms, Python 256x256: 3 ms. C++ wins.
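To catch this kind of mismatch before benchmarking, one can print the input shape baked into each serialized engine. A small sketch using the TensorRT Python API (assumes the TensorRT 8.5+ tensor-name API; the engine filename is a placeholder):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("yolov8n.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# Print the shape of every I/O tensor so both backends can be fed the same resolution
for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    print(name, engine.get_tensor_shape(name))
```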