Closed tonglingwen closed 9 months ago
The error shows a shape mismatch; check whether the export parameters were set correctly.
Hi, that problem is solved. But I am using an rv1126, which is not rknpu2. If FastDeploy is built with -DWITH_TIMVX=ON, what model format should I use (not rknn or onnx, right?), and how do I convert to it?
Please ask that question under the FastDeploy repository.
> Porting PaddleOCR to Rockchip rv1126. rknn-toolkit runs on Ubuntu 18.04.1, and the model to convert is ch_PP-OCRv3_rec_infer.onnx. Converting ONNX to RKNN with rknn-toolkit (1.7.3) raises the error below.

Hi, were you able to convert the ch_PP-OCRv3_rec model to an RKNN model normally? Did you make any modifications? Why do I keep hitting a reshape error during conversion, and how should I fix it?

```
E ValueError: Cannot reshape a tensor with 360 elements to shape [0,40,120] (0 elements) for '{{node Reshape_Reshape_8_82/Reshape_Reshape_8_82}} = Reshape[T=DT_FLOAT, Tshape=DT_INT32](Transpose_Transpose_3_93/Transpose_Transpose_3_93, Reshape_Reshape_8_82/Reshape_Reshape_8_82/shape)' with input shapes: [1,15,3,8], [3] and with input tensors computed as partial shapes: input[1] = [0,40,120].
E Please feedback the detailed log file to the RKNN Toolkit development team.
```
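The error itself is plain shape arithmetic: the tensor feeding the Reshape node has 1×15×3×8 = 360 elements, while the target shape the toolkit computed is [0, 40, 120], meaning a dynamic dimension from the ONNX graph collapsed to 0 during shape inference, so the reshape can never succeed. A minimal sketch of that arithmetic, using only the shapes reported in the log above (nothing here calls rknn-toolkit):

```python
from math import prod

# Shapes reported in the RKNN Toolkit error log
src_shape = [1, 15, 3, 8]   # input to the Reshape node
dst_shape = [0, 40, 120]    # target shape computed by the toolkit

src_elems = prod(src_shape)  # 360 elements available
dst_elems = prod(dst_shape)  # 0 elements: a dynamic dim collapsed to 0

# A reshape is only valid when both sides hold the same positive element count
valid = src_elems == dst_elems and src_elems > 0
print(f"reshape {src_shape} -> {dst_shape}: "
      f"{src_elems} vs {dst_elems} elements, valid={valid}")
```

This is why fixing the export parameters (a static input shape, including the batch dimension) makes the error go away: once every dimension is a concrete positive integer, the toolkit can compute a non-zero target shape for the Reshape.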
The batch size has to be set to 3, otherwise the conversion fails.
> The batch size has to be set to 3, otherwise the conversion fails.

Hi, this is my conversion script. I did set it to 3, but I still get the same error. Any idea what the cause is?

```python
# Create RKNN object
rknn = RKNN()
if not os.path.exists(ONNX_MODEL):
    print('model not exist')
    exit(-1)

# pre-process config
print('--> Config model')
rknn.config(mean_values=[[0.0, 0.0, 0.0]],
            std_values=[[1.0, 1.0, 1.0]],
            reorder_channel='0 1 2',
            target_platform='rv1126',
            batch_size=3)
# quantized_algorithm: normal(default), mmse, kl_divergence, moving_average
print('done')

# Load ONNX model
print('--> Loading model')
ret = rknn.load_onnx(model=ONNX_MODEL)
if ret != 0:
    print('Load onnx model failed!')
    exit(ret)
print('done')

# Build model
print('--> Building model')
ret = rknn.build(do_quantization=True, dataset=DATASET)
if ret != 0:
    print('Build rknn model failed!')
    exit(ret)
print('done')

# Export RKNN model
print('--> Export RKNN model')
ret = rknn.export_rknn(RKNN_MODEL)
if ret != 0:
    print('Export rknn failed!')
    exit(ret)
print('done')
```
> Hi, this is my conversion script. I did set it to 3, but I still get the same error. Any idea what the cause is?

My script is the same as yours above. It does convert to an RKNN model, but after conversion it still fails to run on the rv1126. You can give it a try, and if you get it running, let's compare notes.
> It does convert to an RKNN model, but after conversion it still fails to run on the rv1126.
Could you add me on QQ so we can discuss? 1054006283
> Could you add me on QQ so we can discuss? 1054006283
I can't find that QQ number; mine is 2528514903.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.