microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

How to run FP16 inference for a YOLOv5 model with ONNX Runtime in C++ #20395

Open hkdddld opened 4 months ago

hkdddld commented 4 months ago

Describe the issue

When I run inference on an FP16 YOLOv5 model, converted as shown in the attached screenshot (微信图片_20240420200338), no results come out. Why is that?

To reproduce

微信图片_20240420200338 (screenshot attachment)

Urgency

No response

Platform

Linux

OS Version

22.0

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

1.10.1

ONNX Runtime API

C++

Architecture

X86

Execution Provider

Default CPU

Execution Provider Library Version

No response

Model File

The official yolov5s model

Is this a quantized model?

Yes
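
For context: a common cause of "no results" when running an FP16 model through the C++ API is feeding float32 host data into a float16 input tensor. Each input value has to be converted to half-precision bits (e.g. into an `Ort::Float16_t` buffer) before `Run()`. Below is a minimal, self-contained sketch of such a float-to-half conversion; it is not from the reporter's code, and the function name is illustrative:

```cpp
#include <cstdint>
#include <cstring>

// Illustrative float32 -> float16 bit conversion (round-to-nearest-even for
// normal results; subnormal half results are flushed to signed zero in this
// simplified sketch). The 16-bit result can be copied into the model's
// float16 input buffer.
uint16_t float_to_half(float f) {
    uint32_t x;
    std::memcpy(&x, &f, sizeof(x));                     // type-pun safely
    uint16_t sign = static_cast<uint16_t>((x >> 16) & 0x8000u);
    int32_t  exp  = static_cast<int32_t>((x >> 23) & 0xFFu) - 127 + 15;
    uint32_t mant = x & 0x7FFFFFu;
    if (exp >= 31)                                      // overflow, inf, NaN
        return sign | 0x7C00u | (mant ? 0x200u : 0u);
    if (exp <= 0)                                       // underflow -> signed zero
        return sign;
    uint32_t half_mant  = mant >> 13;                   // 23 -> 10 mantissa bits
    uint32_t round_bits = mant & 0x1FFFu;
    if (round_bits > 0x1000u || (round_bits == 0x1000u && (half_mant & 1u))) {
        ++half_mant;                                    // round to nearest even
        if (half_mant == 0x400u) {                      // mantissa carried out
            half_mant = 0;
            if (++exp >= 31) return sign | 0x7C00u;     // rounded up to inf
        }
    }
    return sign | static_cast<uint16_t>(exp << 10) | static_cast<uint16_t>(half_mant);
}
```

Note that the input tensor itself must also be created with element type `ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT16` rather than `FLOAT`, otherwise the runtime will misread the buffer.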

yihonglyu commented 4 months ago

@hkdddld Could you please share the entire reproducer in text format so that I can execute it?

hkdddld commented 3 months ago

> Could you please share the entire reproducer in text format so that I can execute it?

[Uploading YOLOV5.txt…]()

hkdddld commented 3 months ago

YOLOV5.txt

github-actions[bot] commented 2 months ago

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

luozhaohui800 commented 1 month ago

OP, did you solve this? My FP16 super-resolution model also gives wrong inference results in C++, while inference in Python works fine.
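
One pattern that matches "wrong in C++ but correct in Python" is reading an FP16 output buffer as if it held float32 values: the Python bindings widen the data automatically, while in C++ each 16-bit element has to be widened by hand before postprocessing. A self-contained sketch of the reverse half-to-float conversion (the function name is illustrative, not part of the ONNX Runtime API):

```cpp
#include <cstdint>
#include <cstring>

// Illustrative float16 -> float32 conversion for reading an FP16 output
// buffer; handles zeros, subnormals, infinities and NaN.
float half_to_float(uint16_t h) {
    uint32_t sign = static_cast<uint32_t>(h & 0x8000u) << 16;
    uint32_t exp  = (h >> 10) & 0x1Fu;
    uint32_t mant = h & 0x3FFu;
    uint32_t bits;
    if (exp == 0) {
        if (mant == 0) {
            bits = sign;                                // signed zero
        } else {                                        // subnormal: renormalize
            uint32_t e = 127 - 15 + 1;
            while (!(mant & 0x400u)) { mant <<= 1; --e; }
            bits = sign | (e << 23) | ((mant & 0x3FFu) << 13);
        }
    } else if (exp == 31) {                             // inf / NaN
        bits = sign | 0x7F800000u | (mant << 13);
    } else {                                            // normal number
        bits = sign | ((exp - 15 + 127) << 23) | (mant << 13);
    }
    float f;
    std::memcpy(&f, &bits, sizeof(f));
    return f;
}
```

Recent versions of the C++ headers also ship an `Ort::Float16_t` type with float conversions; the manual version above is only to make the bit layout explicit.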

yihonglyu commented 1 month ago

onnxruntime is 1.18.1 now. Do you encounter the same issue with latest commit?

hkdddld commented 1 month ago

> OP, did you solve this? My FP16 super-resolution model also gives wrong inference results in C++, while inference in Python works fine.

Not solved.

hkdddld commented 1 month ago

> onnxruntime is 1.18.1 now. Do you encounter the same issue with latest commit?

I haven't tried that yet.

yihonglyu commented 1 month ago

Could you use the latest commit on main or release (i.e., 1.18.1) and see whether the issue is gone?

DingHsun commented 1 month ago

Here is a reference project I found: https://github.com/Amyheart/yolov5v8-dnn-onnxruntime . It supports YOLO_ORIGIN_V5_HALF (FP16), but in practice it actually runs slower than the FP32 yolov5.onnx; my GPU is an RTX 4070.
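
FP16 running slower than FP32 is not unusual when the execution provider has to insert Cast nodes or lacks native half-precision kernels for some operators. Comparing the two fairly needs a warm-up run and averaged timings; a minimal timing skeleton, where `run_inference` is a hypothetical stand-in for whatever invokes the model (e.g. a lambda wrapping `session.Run`):

```cpp
#include <chrono>

// Hypothetical benchmark skeleton: average wall-clock milliseconds per call.
// One warm-up call is made first so lazy initialization is not measured.
template <typename Fn>
double average_ms(Fn&& run_inference, int iters) {
    using clock = std::chrono::steady_clock;
    run_inference();                                    // warm-up
    auto t0 = clock::now();
    for (int i = 0; i < iters; ++i) run_inference();
    auto t1 = clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count() / iters;
}
```

Timing both the FP16 and FP32 sessions this way makes it easier to tell whether the slowdown is in the model itself or in one-time setup costs.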