Closed Tianyun-Xuan closed 2 months ago
Wow, I've never seen this error before.
I'm converting a model I trained myself; it only uses Conv2d, BatchNorm2d, and ReLU. Do I need to provide more information, and if so, what should I prepare?
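For reference, a minimal sketch of the kind of Conv2d + BatchNorm2d + ReLU network described above; the channel counts, kernel sizes, and input shape are assumptions for illustration, not the actual trained model:

```python
# Minimal sketch of a Conv2d + BatchNorm2d + ReLU network of the kind described
# above; all layer parameters and the input shape are assumptions for illustration.
import torch
import torch.nn as nn

class ConvBNReLU(nn.Sequential):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__(
            nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

model = nn.Sequential(ConvBNReLU(4, 128), ConvBNReLU(128, 128)).eval()

# Export to ONNX so rknn-toolkit2 can load it (shape and opset are assumptions).
torch.onnx.export(model, torch.randn(1, 4, 128, 1200), "model.onnx", opset_version=12)
```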
A model this simple shouldn't fail. First check whether there's a problem with your conversion process: environment, version, code.
➜ ~ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.2 LTS
Release:        22.04
Codename:       jammy
name: rknn
channels:
- conda-forge
dependencies:
- _libgcc_mutex=0.1=conda_forge
- _openmp_mutex=4.5=2_gnu
- bzip2=1.0.8=hd590300_5
- ca-certificates=2024.7.4=hbcca054_0
- ld_impl_linux-64=2.40=hf3520f5_7
- libblas=3.9.0=22_linux64_openblas
- libcblas=3.9.0=22_linux64_openblas
- libexpat=2.6.2=h59595ed_0
- libffi=3.4.2=h7f98852_5
- libgcc-ng=14.1.0=h77fa898_0
- libgfortran-ng=14.1.0=h69a702a_0
- libgfortran5=14.1.0=hc5f4f2c_0
- libgomp=14.1.0=h77fa898_0
- liblapack=3.9.0=22_linux64_openblas
- libnsl=2.0.1=hd590300_0
- libopenblas=0.3.27=pthreads_hac2b453_1
- libsqlite=3.46.0=hde9e2c9_0
- libstdcxx-ng=14.1.0=hc0a3c3a_0
- libuuid=2.38.1=h0b41bf4_0
- libxcrypt=4.4.36=hd590300_1
- libzlib=1.3.1=h4ab18f5_1
- ncurses=6.5=h59595ed_0
- numpy=1.24.4=py311h64a7726_0
- openssl=3.3.1=h4ab18f5_1
- pip=24.0=pyhd8ed1ab_0
- python=3.11.9=hb806964_0_cpython
- python_abi=3.11=4_cp311
- readline=8.2=h8228510_1
- setuptools=70.2.0=pyhd8ed1ab_0
- tk=8.6.13=noxft_h4845f30_101
- tzdata=2024a=h0c530f3_0
- wheel=0.43.0=pyhd8ed1ab_1
- xz=5.2.6=h166bdaf_0
import argparse
import os
import sys

from rknn.api import RKNN  # type: ignore

QUANTIZE_ON = None


def main():
    # Create RKNN object
    rknn = RKNN(verbose=True)

    # pre-process config
    print('--> Config model')
    rknn.config(target_platform="rk3588")
    print('done')

    # Load ONNX model
    print('--> Loading model')
    ret = rknn.load_onnx(model="/home/demo/workspace/model.onnx")
    if ret != 0:
        print('Load model failed!')
        exit(ret)
    print('done')

    # Build model
    print('--> Building model')
    ret = rknn.build(do_quantization=QUANTIZE_ON)
    if ret != 0:
        print('Build model failed!')
        exit(ret)
    print('done')

    # Export RKNN model
    print('--> Export rknn model')
    ret = rknn.export_rknn("model.rknn")
    if ret != 0:
        print('Export rknn model failed!')
        exit(ret)
    print('done')

    rknn.release()


if __name__ == '__main__':
    main()
I rknn-toolkit2 version: 2.0.0b0+9bab5682
E init_runtime: RKNN model does not exist, please load & build model first!
E get_sdk_version: The runtime has not been initialized, please call init_runtime first!
I ===================== WARN(0) =====================
E rknn-toolkit2 version: 2.0.0b0+9bab5682
I rknn-toolkit2 version: 2.0.0b0+9bab5682
I Loading : 100%|████████████████████████████████████████████████| 18/18 [00:00<00:00, 74602.25it/s]
W load_onnx: The config.mean_values is None, zeros will be set for input 0!
W load_onnx: The config.std_values is None, ones will be set for input 0!
D base_optimize ...
D base_optimize done.
D
D fold_constant ...
D fold_constant done.
D
D correct_ops ...
D correct_ops done.
D
D fuse_ops ...
D fuse_ops done.
D
D sparse_weight ...
D sparse_weight done.
D
I rknn building ...
E RKNN: [18:47:19.736] Failed to config layer: 'Conv:/lila2/conv/conv/Conv', Fatal Error
E RKNN: [18:47:19.736] core_num 3 ori_Ih 126 ori_Iw 1198 ori_Ic 384 ori_Ib 1
E RKNN: [18:47:19.736] ori_Kh 1 ori_Kw 1 ori_Kk 128 ori_Kc 384 ori_Ksx 1 ori_Ksy 1
E RKNN: [18:47:19.736] ori_Oh 128 oriOw 1200 oriOc 128 pad_t 1 pad_b 1 pad_l 1 pad_r 1,
E RKNN: [18:47:19.736] dilation_h 1 dilation_w 1 is_deconv 0,
E RKNN: [18:47:19.736] Please help report this bug!
[1] 17745 IOT instruction python3 src/onnx2rknn.py >> log.txt
Solutions I can think of: 1. Try again with a different rknn version, including older releases and the beta releases; 2. The whole model is only 18 layers, so check whether one of your convolution's parameters is written incorrectly. Can you confirm the model runs correctly with onnxruntime on the PC?
I tried onnxruntime, and also converted the model to TensorRT with NVIDIA's tools and ran inference; the model and the results are both correct.
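A minimal sketch of the PC-side onnxruntime check referred to above; the model path and the NCHW input shape are assumptions for illustration:

```python
# Quick PC-side sanity check of the ONNX model with onnxruntime; the model path
# and input shape are assumptions, not taken from the actual project.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("/home/demo/workspace/model.onnx",
                            providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
dummy = np.random.rand(1, 4, 128, 1200).astype(np.float32)  # assumed NCHW shape
outputs = sess.run(None, {inp.name: dummy})
print([o.shape for o in outputs])
```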
I'm using the master branch of rknn-toolkit2; I checked and the wheel's build number matches the beta tag: rknn_toolkit2-2.0.0b0+9bab5682-cp310-cp310-linux_x86_64.whl
Download the latest beta version from here: Download. You can also download all packages, docker image, examples, docs and platform-tools from RKNPU2_SDK, fetch code: rknn. You can get more examples from the rknn model zoo.
I tried the V2.0.0b22 wheels for both Python 3.10 and Python 3.11 and got the same error.
I rknn-toolkit2 version: 2.0.0b22+de81ff8b
--> Config model
done
--> Loading model
I Loading : 100%|████████████████████████████████████████████████| 18/18 [00:00<00:00, 33825.03it/s]
W load_onnx: The config.mean_values is None, zeros will be set for input 0!
W load_onnx: The config.std_values is None, ones will be set for input 0!
done
--> Building model
D base_optimize ...
D base_optimize done.
D
D fold_constant ...
D fold_constant done.
D
D correct_ops ...
D correct_ops done.
D
D fuse_ops ...
D fuse_ops done.
D
D sparse_weight ...
D sparse_weight done.
D
I rknn building ...
I RKNN: [16:46:36.101] compress = 0, conv_eltwise_activation_fuse = 1, global_fuse = 1, multi-core-model-mode = 7, output_optimize = 1, layout_match = 1, enable_argb_group = 0, pipeline_fuse = 0
I RKNN: librknnc version: 2.0.0b22 (97e97faf01@2024-07-15T06:16:27)
D RKNN: [16:46:36.103] RKNN is invoked
D RKNN: [16:46:36.109] >>>>>> start: rknn::RKNNExtractCustomOpAttrs
D RKNN: [16:46:36.109] <<<<<<<< end: rknn::RKNNExtractCustomOpAttrs
D RKNN: [16:46:36.109] >>>>>> start: rknn::RKNNSetOpTargetPass
D RKNN: [16:46:36.109] <<<<<<<< end: rknn::RKNNSetOpTargetPass
D RKNN: [16:46:36.109] >>>>>> start: rknn::RKNNBindNorm
D RKNN: [16:46:36.109] <<<<<<<< end: rknn::RKNNBindNorm
D RKNN: [16:46:36.109] >>>>>> start: rknn::RKNNEliminateQATDataConvert
D RKNN: [16:46:36.109] <<<<<<<< end: rknn::RKNNEliminateQATDataConvert
D RKNN: [16:46:36.109] >>>>>> start: rknn::RKNNTileGroupConv
D RKNN: [16:46:36.109] <<<<<<<< end: rknn::RKNNTileGroupConv
D RKNN: [16:46:36.109] >>>>>> start: rknn::RKNNAddConvBias
D RKNN: [16:46:36.109] <<<<<<<< end: rknn::RKNNAddConvBias
D RKNN: [16:46:36.109] >>>>>> start: rknn::RKNNTileChannel
D RKNN: [16:46:36.109] <<<<<<<< end: rknn::RKNNTileChannel
D RKNN: [16:46:36.109] >>>>>> start: rknn::RKNNPerChannelPrep
D RKNN: [16:46:36.109] <<<<<<<< end: rknn::RKNNPerChannelPrep
D RKNN: [16:46:36.109] >>>>>> start: rknn::RKNNBnQuant
D RKNN: [16:46:36.109] <<<<<<<< end: rknn::RKNNBnQuant
D RKNN: [16:46:36.109] >>>>>> start: rknn::RKNNFuseOptimizerPass
D RKNN: [16:46:36.109] <<<<<<<< end: rknn::RKNNFuseOptimizerPass
D RKNN: [16:46:36.109] >>>>>> start: rknn::RKNNTurnAutoPad
D RKNN: [16:46:36.109] <<<<<<<< end: rknn::RKNNTurnAutoPad
D RKNN: [16:46:36.109] >>>>>> start: rknn::RKNNInitRNNConst
D RKNN: [16:46:36.109] <<<<<<<< end: rknn::RKNNInitRNNConst
D RKNN: [16:46:36.109] >>>>>> start: rknn::RKNNInitCastConst
D RKNN: [16:46:36.109] <<<<<<<< end: rknn::RKNNInitCastConst
D RKNN: [16:46:36.109] >>>>>> start: rknn::RKNNMultiSurfacePass
D RKNN: [16:46:36.109] <<<<<<<< end: rknn::RKNNMultiSurfacePass
D RKNN: [16:46:36.109] >>>>>> start: rknn::RKNNReplaceConstantTensorPass
D RKNN: [16:46:36.109] <<<<<<<< end: rknn::RKNNReplaceConstantTensorPass
D RKNN: [16:46:36.109] >>>>>> start: rknn::RKNNSubgraphManager
D RKNN: [16:46:36.109] <<<<<<<< end: rknn::RKNNSubgraphManager
D RKNN: [16:46:36.109] >>>>>> start: OpEmit
D RKNN: [16:46:36.109] <<<<<<<< end: OpEmit
D RKNN: [16:46:36.109] >>>>>> start: rknn::RKNNAddFirstConv
D RKNN: [16:46:36.109] <<<<<<<< end: rknn::RKNNAddFirstConv
D RKNN: [16:46:36.109] >>>>>> start: rknn::RKNNTilingPass
E RKNN: [16:46:36.124] Failed to config layer: 'Conv:/lila2/conv/conv/Conv', Fatal Error
E RKNN: [16:46:36.124] core_num 3 ori_Ih 126 ori_Iw 1198 ori_Ic 384 ori_Ib 1
E RKNN: [16:46:36.124] ori_Kh 1 ori_Kw 1 ori_Kk 128 ori_Kc 384 ori_Ksx 1 ori_Ksy 1
E RKNN: [16:46:36.124] ori_Oh 128 oriOw 1200 oriOc 128 pad_t 1 pad_b 1 pad_l 1 pad_r 1,
E RKNN: [16:46:36.124] dilation_h 1 dilation_w 1 is_deconv 0,
E RKNN: [16:46:36.124] Please help report this bug!
[1] 3011544 IOT instruction python3 src/onnx2rknn.py
What are your convolution parameters? Do they stay within the limits given in the documentation? If that is also fine, you'll have to wait for someone from the official team to answer. One more note: the toolkit is identical across Python versions, but different beta versions are not, and the b22 version is quite unreliable.
Thank you sincerely for your reply. In the documentation I only found performance recommendations for convolution sizes; are there explicit limits on convolution sizes, and where can I find them? My input is [4, 128, 1200], and the convolutions use four groups of parameters:
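The four parameter groups themselves are not shown in the thread. As a hedged reconstruction of only the failing layer, the shapes printed in the error log ('Conv:/lila2/conv/conv/Conv') correspond to a 1x1 convolution with padding 1, assuming NCHW layout; this is inferred from the log, not confirmed against the actual network:

```python
# Hedged reconstruction of the layer named in the error log, read from the
# logged shapes (ori_Ic 384, ori_Kh/ori_Kw 1, ori_Kk 128, pad 1, stride 1);
# NCHW layout is assumed and this is not confirmed against the actual network.
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=384, out_channels=128,
                 kernel_size=1, stride=1, padding=1)  # 1x1 kernel with padding=1
x = torch.randn(1, 384, 126, 1198)                    # ori_Ib, ori_Ic, ori_Ih, ori_Iw
print(conv(x).shape)                                  # torch.Size([1, 128, 128, 1200])
```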
05_RKNN_Compiler_Support_Operator_List_v2.0.0beta0.pdf
Your convolution parameters look perfectly ordinary, so I'm not sure what the problem is either.
You could try commenting out some of the convolutions and converting again to see which convolution is causing the problem.
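A hedged sketch of that bisection idea: export a truncated copy of the network up to a chosen layer and feed each truncated ONNX file through the same onnx2rknn.py script to narrow down which convolution triggers the failure. The `model` object, input shape, and file names below are assumptions for illustration:

```python
# Bisection sketch: export progressively longer prefixes of a Sequential-style
# model to ONNX, then convert each one with the existing onnx2rknn.py script to
# find the first prefix whose RKNN build fails. `model`, the input shape, and
# the output file names are assumptions for illustration.
import torch
import torch.nn as nn

def export_prefix(model, n_layers, dummy_input, path):
    """Export only the first n_layers children of a Sequential-style model."""
    prefix = nn.Sequential(*list(model.children())[:n_layers]).eval()
    torch.onnx.export(prefix, dummy_input, path, opset_version=12)

# Example usage:
# for n in range(1, len(list(model.children())) + 1):
#     export_prefix(model, n, torch.randn(1, 4, 128, 1200), f"prefix_{n}.onnx")
```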
Thanks, I'll give it a try.
Converting an ONNX model to an RKNN model with rknn-toolkit2 on WSL.
In a conda Python 3.10 environment, using the Python 3.10 wheel + numpy 1.24, the following error occurs:
E RKNN: [14:40:48.150] Failed to config layer: 'Conv:/lila2/conv/conv/Conv', Fatal Error
E RKNN: [14:40:48.150] core_num 3 ori_Ih 126 ori_Iw 1198 ori_Ic 384 ori_Ib 1
E RKNN: [14:40:48.150] ori_Kh 1 ori_Kw 1 ori_Kk 128 ori_Kc 384 ori_Ksx 1 ori_Ksy 1
E RKNN: [14:40:48.150] ori_Oh 128 oriOw 1200 oriOc 128 pad_t 1 pad_b 1 pad_l 1 pad_r 1,
E RKNN: [14:40:48.150] dilation_h 1 dilation_w 1 is_deconv 0,
E RKNN: [14:40:48.150] Please help report this bug!