airockchip / rknn_model_zoo

Apache License 2.0

LPRNet: the model and example from this repo give unexpected results on an RK3588 board #225

Open chuqingq opened 3 days ago

chuqingq commented 3 days ago

Steps:

  1. Downloaded the ONNX model and converted it to an RKNN model, following the official instructions.
  2. Uploaded lprnet.rknn and the test.jpg from this repo's lprnet example to the RK3588 and ran inference. The result is:

车牌识别结果: 湘PCL

The expected result is:

车牌识别结果: 湘F6CL03

Notes:

This repo's lprnet.py connects to the board over adb and runs inference from the host; instead, I uploaded lprnet.rknn and test.jpg to the RK3588 board and ran inference directly on it.

The inference code is adapted from this repo's lprnet.py. The main differences are that RKNN is replaced with RKNNLite, and cvtColor and expand_dims preprocessing steps are added:


```python
import cv2
import numpy as np
from rknnlite.api import RKNNLite

# decode() and CHARS are copied over from this repo's lprnet.py

if __name__ == "__main__":
    model_path = "./lprnet.rknn"
    target = "rk3588"

    # Create RKNNLite object
    rknn_lite = RKNNLite(verbose=False)

    # Load RKNN model
    ret = rknn_lite.load_rknn(model_path)
    if ret != 0:
        print('Load RKNN model "{}" failed!'.format(model_path))
        exit(ret)
    print("done")

    print(target)

    # Init runtime environment
    print("--> Init runtime environment")
    ret = rknn_lite.init_runtime()
    if ret != 0:
        print("Init runtime environment failed!")
        exit(ret)
    print("done")

    # Set inputs: BGR -> RGB, resize to the model input size, add batch dim
    img = cv2.imread("test.jpg")
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (94, 24))
    img = np.expand_dims(img, 0)

    # Inference
    print("--> Running model")
    outputs = rknn_lite.inference(inputs=[img])

    # Post process
    print("--> PostProcess")
    labels, pred_labels = decode(outputs[0], CHARS)
    print("车牌识别结果: " + str(labels[0]))  # plate recognition result

    # Release
    rknn_lite.release()
```
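For readers following along: decode and CHARS are defined in this repo's lprnet.py and are not shown above. In essence the post-process is a greedy CTC decode; below is a minimal sketch of that technique, with a hypothetical, shortened CHARS table (the real one lists all province abbreviations, letters, and digits).

```python
import numpy as np

# Hypothetical, shortened CHARS table for illustration only; in lprnet.py
# the final '-' entry serves as the CTC blank symbol.
CHARS = ['湘', 'F', '6', 'C', 'L', '0', '3', 'P', '-']
BLANK = len(CHARS) - 1

def greedy_ctc_decode(logits, chars):
    """Greedy CTC decode of an LPRNet-style output.

    logits: (num_classes, seq_len) score matrix. Take the argmax class at
    each timestep, collapse consecutive repeats, and drop blanks.
    """
    best = np.argmax(logits, axis=0)  # best class index per timestep
    label = []
    prev = BLANK
    for idx in best:
        if idx != BLANK and idx != prev:  # skip blanks, collapse repeats
            label.append(chars[idx])
        prev = idx
    return ''.join(label)
```

If the raw per-timestep argmax sequence already looks wrong, the problem is upstream of decoding, i.e. in preprocessing or in the converted model itself.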

Could anyone advise what might be going on? Running the official example should not be off by this much. Thanks a lot!

chuqingq commented 3 days ago

Additional note:

The conversion command I used is the example from the Readme.md under this repo's lprnet directory:

python convert.py ../model/lprnet.onnx rk3588
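If convert.py here follows the usual rknn_model_zoo pattern, omitting the optional dtype argument quantizes the model to i8, and a large accuracy drop after conversion is often quantization related (re-converting with the fp dtype, if the README lists one, would rule this out). For intuition, here is a minimal numpy sketch of the round-trip error that symmetric int8 quantization introduces; the helper name and scale are illustrative, not part of the toolkit.

```python
import numpy as np

def fake_quant_int8(x, scale):
    """Simulate a symmetric int8 quantize -> dequantize round trip."""
    q = np.clip(np.round(x / scale), -128, 127)  # int8 code for each value
    return q * scale                             # back to float

# activations in [-1, 1] with a scale chosen to cover that range
x = np.linspace(-1.0, 1.0, 101)
xq = fake_quant_int8(x, scale=1.0 / 127)
max_err = np.abs(x - xq).max()  # bounded by scale / 2 away from the clip edges
```

Per-value rounding error this small rarely explains four wrong characters on its own, but it compounds layer by layer, which is why comparing an fp conversion against the i8 one is a quick first check.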