Closed guishilike closed 2 years ago
@guishilike Hello, I haven't tried it yet, but if the half-precision ONNX model works with Python onnxruntime inference, it should work in the C++ version as well; if I'm not mistaken, there are no additional flags for fp16 onnxruntime inference.
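For reference, fp16 inference in Python onnxruntime needs no special session flag — the session just expects float16 input tensors. A minimal sketch of casting inputs to match the model's declared input type (the model path, input shape, and `prepare_input` helper below are illustrative, not part of this repo):

```python
import numpy as np

def prepare_input(arr: np.ndarray, onnx_type: str) -> np.ndarray:
    """Cast a numpy array to match an ONNX input tensor type.

    onnxruntime reports input types as strings such as
    "tensor(float)" or "tensor(float16)"; a half-precision model
    expects float16 inputs, so we cast accordingly.
    """
    dtype_map = {
        "tensor(float)": np.float32,
        "tensor(float16)": np.float16,
    }
    return arr.astype(dtype_map[onnx_type])

# Typical usage with onnxruntime ("yolov5s_fp16.onnx" is a placeholder path):
#   import onnxruntime as ort
#   sess = ort.InferenceSession("yolov5s_fp16.onnx")
#   inp = sess.get_inputs()[0]
#   x = prepare_input(np.random.rand(1, 3, 640, 640).astype(np.float32), inp.type)
#   outputs = sess.run(None, {inp.name: x})

x = prepare_input(np.zeros((1, 3, 640, 640), dtype=np.float32), "tensor(float16)")
print(x.dtype)  # float16
```

Feeding float32 arrays to an fp16 model raises a type-mismatch error, which is the most common pitfall when switching to half precision.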
The official yolov5 PyTorch repo uses half precision. I tried the ONNX model with half precision in Python, and the speed increased. Can this repo support half precision?