rockchip-linux / rknn-toolkit

BSD 3-Clause "New" or "Revised" License

Problems encountered when converting TensorFlow models to RKNN models #448

Open commanderina opened 5 months ago

commanderina commented 5 months ago

The following is my code; the issue occurs in the model-loading section:

from rknn.api import RKNN

if __name__ == '__main__':

    # Select the target device
    target = 'rv1126'

    # Create the RKNN object
    rknn = RKNN()

    # Configure the RKNN model
    print('--> config model')
    rknn.config(mean_values=[[127.5, 127.5, 127.5]],
                std_values=[[127.5, 127.5, 127.5]],
                reorder_channel='0 1 2',
                target_platform=[target])
    print('done')

    # Load the TensorFlow model
    print('--> loading model')
    ret = rknn.load_tensorflow(tf_pb='/home/alientek/ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb',
                               inputs=['FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/batchnorm/mul_1'],
                               outputs=['concat', 'concat_1'],
                               input_size_list=[[300, 300, 3]])
    if ret != 0:
        print('load model failed!')
        rknn.release()
        exit(ret)
    print('done')

When loading the model, the following error is reported:

W:tensorflow:From /home/alientek/anaconda3/envs/py3.8-rknn-1.7.5/lib/python3.8/site-packages/rknn/api/rknn.py:107: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.extract_sub_graph
E Catch exception when loading tensorflow model: /home/alientek/ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb!
E Traceback (most recent call last):
E   File "rknn/api/rknn_base.py", line 253, in rknn.api.rknn_base.RKNNBase.load_tensorflow
E   File "rknn/base/RKNNlib/RK_nn.py", line 52, in rknn.base.RKNNlib.RK_nn.RKnn.load_tensorflow
E   File "rknn/base/RKNNlib/app/importer/import_tensorflow.py", line 110, in rknn.base.RKNNlib.app.importer.import_tensorflow.Importensorflow.run
E   File "rknn/base/RKNNlib/converter/convert_tf.py", line 103, in rknn.base.RKNNlib.converter.convert_tf.convert_tf.init
E   File "rknn/base/RKNNlib/converter/tensorflowloader.py", line 45, in rknn.base.RKNNlib.converter.tensorflowloader.TF_Graph_Preprocess.init
E AttributeError: 'NoneType' object has no attribute 'op'
E Please feedback the detailed log file to the RKNN Toolkit development team.
E You can also check github issues: https://github.com/rockchip-linux/rknn-toolkit/issues
load model failed!

I don't know how to solve this! >_< Please help me. Thanks, everyone!
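The AttributeError: 'NoneType' object has no attribute 'op' in the log above is consistent with one of the names passed via inputs or outputs not existing in the frozen graph (the fix in the last comment below points the same way). A minimal diagnostic sketch for listing the node names actually present in the .pb, so the names can be verified first (assumes TensorFlow is installed in the same environment as rknn-toolkit; the path is the one from the script above):

    # Sketch: dump every node name/op in a frozen graph so the
    # inputs/outputs arguments to rknn.load_tensorflow() can be checked.
    import tensorflow as tf

    PB_PATH = '/home/alientek/ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb'

    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(PB_PATH, 'rb') as f:
        graph_def.ParseFromString(f.read())

    # Grep this output for the names you intend to pass as inputs/outputs.
    for node in graph_def.node:
        print(node.name, node.op)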

a18361351 commented 1 month ago

Got the same problem, did you solve it?

a18361351 commented 1 month ago

It seems the problem is in this call:

    ret = rknn.load_tensorflow(tf_pb='/home/alientek/ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb',
                               inputs=['FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/batchnorm/mul_1'],
                               outputs=['concat', 'concat_1'],
                               input_size_list=[[300, 300, 3]])

The inputs parameter doesn't match the model's actual input node. I changed it to inputs=['FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Conv2D'] and that solved the problem for me.
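For reference, a sketch of the corrected call in the original script, using the input node name that worked above (verify the name against your own graph first, e.g. with the node-listing snippet earlier in this thread, since differently exported SSD models can use different node names):

    # Corrected load: point `inputs` at the real first conv node of the graph.
    ret = rknn.load_tensorflow(tf_pb='/home/alientek/ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb',
                               inputs=['FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Conv2D'],
                               outputs=['concat', 'concat_1'],
                               input_size_list=[[300, 300, 3]])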