kendryte / nncase

Open deep learning compiler stack for Kendryte AI accelerators ✨
Apache License 2.0

yolov8 model --> .onnx --> kmodel: poor detection results when deployed on 01Studio #1249

Closed Javachenlianghong closed 2 weeks ago

Javachenlianghong commented 1 month ago

nncase.zip

Development environment: Python 3.7, nncase 2.9 (screenshot attached)

1. A kmodel converted with the demo script can be simulated on the PC (screenshot attached), but once it is deployed to the board nothing is detected.
2. A kmodel converted with the YOLOv8 conversion code also detects nothing. I have tried calibration datasets of 100 to 2000 images with the same result, and that kmodel cannot even be simulated on the PC (screenshot attached); see the compile sketch after this list.
3. ONNX inference on the PC gives good results (screenshot attached).

4. Images: images.zip
5. detect_kmodel.py is the inference code running on the K230; PC_detect.py is the code that produces the error described in point 2.
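
For reference, the conversion step in point 2 corresponds to the nncase v2 compile flow sketched below. This is a minimal sketch based on the nncase user guide, not the exact script from the attachment: compile_kmodel is a hypothetical wrapper, and the input shape, preprocessing values (input_range, mean, std, swapRB) and the random calibration data are placeholders that have to match how best.onnx was actually trained and exported. Compiling for the k230 target also assumes the companion nncase-kpu package is installed.

import os
import numpy as np
import nncase

def compile_kmodel(model_path, dump_path, calib_data):
    os.makedirs(dump_path, exist_ok=True)

    # compile options: "k230" targets the 01Studio board; the preprocessing
    # values below are placeholders and must match the YOLOv8 export settings
    compile_options = nncase.CompileOptions()
    compile_options.target = "k230"
    compile_options.preprocess = True
    compile_options.input_type = "uint8"
    compile_options.input_shape = [1, 3, 640, 640]
    compile_options.input_layout = "NCHW"
    compile_options.input_range = [0, 1]
    compile_options.mean = [0, 0, 0]
    compile_options.std = [1, 1, 1]
    compile_options.swapRB = False
    compile_options.dump_ir = True
    compile_options.dump_asm = True
    compile_options.dump_dir = dump_path

    compiler = nncase.Compiler(compile_options)

    # import the ONNX model
    with open(model_path, "rb") as f:
        compiler.import_onnx(f.read(), nncase.ImportOptions())

    # post-training quantization driven by the calibration set
    ptq_options = nncase.PTQTensorOptions()
    ptq_options.samples_count = len(calib_data[0])
    ptq_options.set_tensor_data(calib_data)
    compiler.use_ptq(ptq_options)

    # compile and write the kmodel
    compiler.compile()
    kmodel_path = os.path.join(dump_path, "test.kmodel")
    with open(kmodel_path, "wb") as f:
        f.write(compiler.gencode_tobytes())
    return kmodel_path

# calib_data is one list per model input, each holding preprocessed sample arrays;
# random data here is only a stand-in for real calibration images
calib_data = [[np.random.rand(1, 3, 640, 640).astype(np.float32) for _ in range(10)]]
compile_kmodel("./best.onnx", "./tmp/", calib_data)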

curioyang commented 1 month ago

It runs fine on my side. In PC_detect.py I am not sure whether the paths and files you reference actually exist; try this updated version:

# run kmodel (simulate)
import os
import nncase
from nncase_base_func import *
import numpy as np

kmodel_path = "tmp/test.kmodel"
# random uint8 test input; scale before casting so the values are not all truncated to 0
input_data = [(np.random.rand(1, 3, 640, 640) * 255).astype(np.uint8)]
dump_path = './tmp/'

# run the kmodel in the PC-side simulator and dump each output to a .bin file
result = run_kmodel(kmodel_path, input_data)
for idx, i in enumerate(result):
    print(i.shape)
    i.tofile(os.path.join(dump_path, "nncase_result_{}.bin".format(idx)))

import onnxruntime as rt

model_path = './best.onnx'
onnx_model = model_simplify(model_path)  # returns the path of the simplified ONNX model
onnx_model = model_path                  # override: run the original, unsimplified ONNX file
_, input_info = parse_model_input_output(model_path)
onnx_sess = rt.InferenceSession(onnx_model)

input_dict = {}
for i, info in enumerate(input_info):
    print(info['shape'])
    # same uint8 input as the kmodel, normalized to [0, 1] for the float ONNX model
    input_dict[info["name"]] = input_data[i].astype(np.float32) / 255.0

onnx_results = onnx_sess.run(None, input_dict)

# compare each ONNX output with the corresponding kmodel output
for index, (i, j) in enumerate(zip(onnx_results, result)):
    print("result {} cosine = ".format(index), get_cosine(i, j))