Closed. sanzhang3 closed this issue 3 months ago.
@sanzhang3 Did you find a solution for the error? I am getting the same error.
Yes, you can try adding the args --fused_preprocess --quant_input when you run model_deploy.py. See section 3.2.4 of TPU-MLIR_Technical_Reference_Manual.pdf for details.
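For reference, a minimal sketch of such a deploy command, using the flags as named above (the .mlir, test, and output file names and the quantization mode are placeholders based on the usual yolov5s flow, not confirmed in this thread):

model_deploy.py \
  --mlir yolov5s.mlir \
  --quantize BF16 \
  --chip cv181x \
  --fused_preprocess \
  --quant_input \
  --test_input ../image/dog.jpg \
  --test_reference yolov5s_top_outputs.npz \
  --model yolov5s_cv181x_bf16_fused.cvimodel

Roughly speaking, these two flags fold the image preprocessing (mean/scale) into the model and let it accept quantized input directly, which is what the manual section describes.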
@sanzhang3 Thank you. I will give it a try.
cv181x does not support f16, but it does support bf16 and int8. You can test it with:
cd /workspace/tpu-mlir/regression/
./run_model.py yolov5s --chip cv181x --mode bf16
detect_yolov5.py --model yolov5s_cv181x_bf16.cvimodel --input /workspace/tpu-mlir/regression/image/dog.jpg --output yolov5s_bf16.jpg
After converting yolov5s.onnx to f16 precision, running detect reports an error. root@2c84df3bc13a:/workspace/yolov5/yolov5s_onnx/workspace# detect_yolov5.py \