yasenh / libtorch-yolov5

A LibTorch inference implementation of YOLOv5
MIT License

CPU inference is faster than GPU inference. What could be the reason? #46

Closed: hhxdestiny closed this issue 3 years ago

hhxdestiny commented 3 years ago

I exported yolov5s.pt with export.py under models (the file is unchanged, taken from the latest yolov5).

CPU model export:

python models\export.py --device cpu

Run:

Run once on empty image
----------New Frame----------
pre-process takes : 60 ms
inference takes : 4630 ms
post-process takes : 69 ms
----------New Frame----------
pre-process takes : 77 ms
inference takes : 3762 ms
post-process takes : 155 ms

GPU model export:

python models\export.py --device 0

Run:

Run once on empty image
----------New Frame----------
pre-process takes : 40 ms
inference takes : 2766 ms
post-process takes : 1 ms
----------New Frame----------
pre-process takes : 32 ms
inference takes : 10285 ms
post-process takes : 11 ms
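
For context, a minimal sketch of how the exported TorchScript module is typically loaded and placed on the GPU in LibTorch (the file name, input size, and overall structure here are assumptions for illustration, not necessarily this repo's exact code):

#include <torch/script.h>
#include <torch/cuda.h>

int main() {
    // Pick CUDA when available; a module exported with "--device 0" is traced
    // with CUDA tensors and is meant to run on the GPU.
    torch::Device device(torch::cuda::is_available() ? torch::kCUDA : torch::kCPU);

    torch::jit::script::Module module = torch::jit::load("yolov5s.torchscript.pt");
    module.to(device);
    module.eval();

    // Dummy 640x640 input on the same device as the module.
    auto input = torch::zeros({1, 3, 640, 640}, torch::TensorOptions().device(device));
    auto output = module.forward({input});
    return 0;
}

The very first forward pass on the GPU also pays one-time costs (CUDA context creation, cuDNN algorithm selection, allocator warm-up), which is why a warm-up run is normally done before timing.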
hhxdestiny commented 3 years ago

After modifying the code a bit, I found that the "Run once on empty image" pass does not actually warm up the model; the longest inference time occurs on the second run.
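
For reference, a hedged sketch of an explicit warm-up before timing, assuming a module and device set up as in the sketch above (the loop count, helper name, and use of torch::cuda::synchronize() are illustrative assumptions, not the repo's actual code):

#include <chrono>
#include <cstdio>
#include <torch/script.h>
#include <torch/cuda.h>

// Hypothetical helper: run a few untimed passes, then time one synchronized pass.
void warmup_and_time(torch::jit::script::Module& module, torch::Device device) {
    auto dummy = torch::zeros({1, 3, 640, 640}, torch::TensorOptions().device(device));

    // Warm-up: the first forward passes pay one-time costs (CUDA context,
    // cuDNN algorithm selection, lazy allocations), so keep them out of the timer.
    for (int i = 0; i < 3; ++i) {
        module.forward({dummy});
    }
    if (device.is_cuda()) {
        torch::cuda::synchronize();  // wait for warm-up kernels to finish
    }

    auto start = std::chrono::high_resolution_clock::now();
    module.forward({dummy});
    if (device.is_cuda()) {
        torch::cuda::synchronize();  // CUDA launches are async; sync before reading the clock
    }
    auto end = std::chrono::high_resolution_clock::now();
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
    std::printf("inference takes : %lld ms\n", static_cast<long long>(ms));
}

Without the synchronize calls, GPU timings can be misleading, since the clock may stop before the launched kernels have actually completed.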