PaddlePaddle / PaddleOCR

Awesome multilingual OCR toolkits based on PaddlePaddle (practical ultra lightweight OCR system, support 80+ languages recognition, provide data annotation and synthesis tools, support training and deployment among server, mobile, embedded and IoT devices)
https://paddlepaddle.github.io/PaddleOCR/
Apache License 2.0

Error reported when converting ONNX to RKNN as required while deploying an rknn model #10155

Closed: tonglingwen closed this issue 9 months ago

tonglingwen commented 1 year ago

I am porting PaddleOCR to a Rockchip rv1126. The system running rknn-toolkit is Ubuntu 18.04.1, and the model to be converted is ch_PP-OCRv3_rec_infer.onnx. When converting ONNX to RKNN with rknn-toolkit (1.7.3), the following error is reported: (error screenshot attached)

andyjiang1116 commented 1 year ago

The error indicates a shape mismatch; check whether it is caused by the export parameter settings.
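
If the exported ONNX model still has dynamic input dimensions, one way to rule that out before converting is to pin the input shape to static values with the onnx Python package. This is a minimal sketch, not from this thread; the file names and the 1x3x48x320 target shape for the PP-OCRv3 recognition input are assumptions, adjust them to your export.

import onnx

# load the exported recognition model (file name assumed)
model = onnx.load('ch_PP-OCRv3_rec_infer.onnx')

# pin every input dimension to a static value; 1x3x48x320 (NCHW) is an assumed
# shape for the PP-OCRv3 recognition input
static_shape = [1, 3, 48, 320]
for dim, value in zip(model.graph.input[0].type.tensor_type.shape.dim, static_shape):
    dim.dim_value = value  # assigning dim_value replaces any symbolic/dynamic dim

onnx.save(model, 'ch_PP-OCRv3_rec_infer_static.onnx')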

tonglingwen commented 1 year ago

> The error indicates a shape mismatch; check whether it is caused by the export parameter settings.

Hello, that problem is solved. However, I am using an rv1126, which is not rknpu2. If I compile FastDeploy with -DWITH_TIMVX=ON, what should the model format be (not rknn or onnx, right)? And how do I convert to it?

andyjiang1116 commented 1 year ago

You can ask that question in the FastDeploy repository.

winterxx commented 1 year ago

> I am porting PaddleOCR to a Rockchip rv1126. The system running rknn-toolkit is Ubuntu 18.04.1, and the model to be converted is ch_PP-OCRv3_rec_infer.onnx. When converting ONNX to RKNN with rknn-toolkit (1.7.3), the following error is reported: (error screenshot attached)

Hello, were you able to convert the ch_PP-OCRv3_rec model to rknn successfully? Did you make any modifications? Why do I keep hitting a reshape error during conversion, and how should I fix it?

E ValueError: Cannot reshape a tensor with 360 elements to shape [0,40,120] (0 elements) for '{{node Reshape_Reshape_8_82/Reshape_Reshape_8_82}} = Reshape[T=DT_FLOAT, Tshape=DT_INT32](Transpose_Transpose_3_93/Transpose_Transpose_3_93, Reshape_Reshape_8_82/Reshape_Reshape_8_82/shape)' with input shapes: [1,15,3,8], [3] and with input tensors computed as partial shapes: input[1] = [0,40,120].
E Please feedback the detailed log file to the RKNN Toolkit development team.
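
The target shape [0,40,120] contains a zero, which usually means a dynamic dimension in the Reshape's shape tensor was resolved to 0 during conversion, while the incoming tensor ([1,15,3,8], 360 elements) is not empty. A hedged workaround, not confirmed in this thread: pin the input shape to static values (see the sketch above) and constant-fold the graph with onnx-simplifier before feeding it to rknn-toolkit. The onnxsim package name and the file names below are assumptions.

import onnx
from onnxsim import simplify

# model whose input dims were already pinned to static values
model = onnx.load('ch_PP-OCRv3_rec_infer_static.onnx')

# constant-fold shape computations so Reshape targets become concrete values
model_simplified, ok = simplify(model)
if not ok:
    raise RuntimeError('onnx-simplifier could not validate the simplified model')

onnx.save(model_simplified, 'ch_PP-OCRv3_rec_infer_sim.onnx')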

tonglingwen commented 1 year ago

The batch size has to be set to 3, otherwise it cannot be converted.

winterxx commented 1 year ago

> The batch size has to be set to 3, otherwise it cannot be converted.

Hello, here is my conversion script. I did set it to 3, but I still get the same error. Do you know what the cause might be?

import os
from rknn.api import RKNN

# ONNX_MODEL, RKNN_MODEL and DATASET are assumed to be path constants defined elsewhere in the script

# Create RKNN object
rknn = RKNN()
if not os.path.exists(ONNX_MODEL):
    print('model not exist')
    exit(-1)

# pre-process config
print('--> Config model')
rknn.config(mean_values=[[0.0, 0.0, 0.0]],
             std_values=[[1.0, 1.0, 1.0]],
             reorder_channel='0 1 2',
             target_platform='rv1126',
             batch_size=3
             )
# quantized_algorithm: normal(default), mmse, kl_divergence, moving_average

print('done')

# Load ONNX model
print('--> Loading model')
ret = rknn.load_onnx(model=ONNX_MODEL)
if ret != 0:
    print('Load onnx model failed!')
    exit(ret)
print('done')

# Build model
print('--> Building model')
ret = rknn.build(do_quantization=True, dataset=DATASET)
if ret != 0:
    print('Build rknn model failed!')
    exit(ret)
print('done')

# Export RKNN model
print('--> Export RKNN model')
ret = rknn.export_rknn(RKNN_MODEL)
if ret != 0:
    print('Export rknn failed!')
    exit(ret)
print('done')
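
For reference, rknn.build(do_quantization=True, dataset=DATASET) expects DATASET to point to a plain-text file that lists the calibration images, one path per line. A minimal sketch for generating such a file (the ./calib_images folder and the dataset.txt file name are assumptions):

import glob

# write one calibration image path per line, as expected by rknn.build(dataset=...)
with open('dataset.txt', 'w') as f:
    for path in sorted(glob.glob('./calib_images/*.jpg')):
        f.write(path + '\n')
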
tonglingwen commented 1 year ago

> The batch size has to be set to 3, otherwise it cannot be converted. Hello, here is my conversion script. I did set it to 3, but I still get the same error. Do you know what the cause might be?
> (conversion script quoted above)


# Create RKNN object
rknn = RKNN()
if not os.path.exists(ONNX_MODEL):
    print('model not exist')
    exit(-1)

# pre-process config
print('--> Config model')
rknn.config(mean_values=[[0.0, 0.0, 0.0]],
            std_values=[[1.0, 1.0, 1.0]],
            reorder_channel='0 1 2',
            target_platform='rv1126',
            batch_size=3
            )
# quantized_algorithm: normal(default), mmse, kl_divergence, moving_average
print('done')

# Load ONNX model
print('--> Loading model')
ret = rknn.load_onnx(model=ONNX_MODEL)
if ret != 0:
    print('Load onnx model failed!')
    exit(ret)
print('done')

# Build model
print('--> Building model')
ret = rknn.build(do_quantization=True, dataset=DATASET)
if ret != 0:
    print('Build rknn model failed!')
    exit(ret)
print('done')

# Export RKNN model
print('--> Export RKNN model')
ret = rknn.export_rknn(RKNN_MODEL)
if ret != 0:
    print('Export rknn failed!')
    exit(ret)
print('done')

This is my script. It can be converted to rknn, but after the conversion the model still does not run on the rv1126. You can give it a try; if you get it running, let's compare notes.
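
One way to narrow down where it fails is to run the exported .rknn once from the rknn-toolkit Python API against the board. This is a minimal sketch; the file name and the 48x320x3 input layout are assumptions, and the rv1126 has to be reachable over adb.

import numpy as np
from rknn.api import RKNN

rknn = RKNN()
if rknn.load_rknn('./ch_PP-OCRv3_rec.rknn') != 0:   # file name assumed
    exit(-1)
if rknn.init_runtime(target='rv1126') != 0:         # board connected via adb
    exit(-1)

# one dummy inference just to confirm the model loads and runs on the NPU
dummy = np.random.randint(0, 255, size=(48, 320, 3), dtype=np.uint8)
outputs = rknn.inference(inputs=[dummy])
print([o.shape for o in outputs])

rknn.release()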

winterxx commented 1 year ago

> This is my script. It can be converted to rknn, but after the conversion the model still does not run on the rv1126. You can give it a try; if you get it running, let's compare notes.
> (quoted conversion scripts omitted; identical to the ones above)

Could you add me on QQ so we can discuss: 1054006283

tonglingwen commented 1 year ago

> Could you add me on QQ so we can discuss: 1054006283
> (quoted conversation and scripts omitted; identical to the comments above)

I cannot find that QQ number; mine is 2528514903.

github-actions[bot] commented 10 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.