airockchip / rknn-toolkit2


Does rknn-toolkit2-lite only support ARGB-format input when deploying an RKNN model? #99

Open · lk07828 opened 3 months ago

lk07828 commented 3 months ago

[screenshot: RKNNLite runtime log output]

yuyun2000 commented 3 months ago

No, that's not the case. The lite runtime supports all kinds of inputs, whether 2D, 3D, or 4D, and multiple inputs as well. Did you perhaps set mean and std or something like that?
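For context, mean/std normalization is something that gets set in `rknn.config()` at conversion time rather than in the lite runtime. A minimal sketch (the commented-out values are illustrative, not taken from this thread):

```python
from rknn.api import RKNN

rknn = RKNN(verbose=True)

# Normalization is configured here, at conversion time. If mean_values /
# std_values are left out, the runtime uses the input tensor as-is.
rknn.config(
    target_platform='rk3588',
    # mean_values=[[0.0, 0.0, 0.0]],  # illustrative per-feature means
    # std_values=[[1.0, 1.0, 1.0]],   # illustrative per-feature stds
)
```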

lk07828 commented 3 months ago

@yuyun2000 Thanks for the reply! My training, conversion, and deployment process is as follows.

1. The dataset is in the following format:

[screenshot: dataset sample]

2. Train with TensorFlow and convert to TFLite format:

```python
import tensorflow as tf

# x, y: dataset features and targets loaded beforehand
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, input_shape=(3,), activation='relu'),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
model.fit(x, y, epochs=500)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open('advertising_model.tflite', 'wb') as f:
    f.write(tflite_model)
```
3. Model conversion succeeds:

```python
import numpy as np
import pandas as pd
from rknn.api import RKNN
import matplotlib.pyplot as plt

rknn = RKNN(verbose=True)

# Pre-process config
print('--> Config model')
rknn.config(target_platform='rk3588')
print('done')

# Load model
print('--> Loading model')
ret = rknn.load_tflite(model='advertising_model.tflite')
if ret != 0:
    print('Load model failed!')
    exit(ret)
print('done')

# Build model
print('--> Building model')
ret = rknn.build(do_quantization=False)
if ret != 0:
    print('Build model failed!')
    exit(ret)
print('done')

# Export rknn model
print('--> Export rknn model')
ret = rknn.export_rknn('./advertising_model.rknn')
if ret != 0:
    print('Export rknn model failed!')
    exit(ret)
print('done')
```


4. Deployed to the RK3588 board, but inference does not work correctly:
```python
import numpy as np
import pandas as pd
from rknnlite.api import RKNNLite

rknn_model = "advertising_model.rknn"
rknn_lite = RKNNLite(verbose=True)

print('--> Load RKNN model')
ret = rknn_lite.load_rknn(rknn_model)
if ret != 0:
    print('Load RKNN model failed')
    exit(ret)

print('done')

ret = rknn_lite.init_runtime(core_mask=RKNNLite.NPU_CORE_AUTO)
if ret != 0:
    print('Init runtime environment failed!')
    exit(ret)
print('done')

feature = np.array([24.1, 30.0, 51.0])
quantized_feature = np.round(feature).astype(np.float16)
input_data = quantized_feature.reshape(1, -1)  # shape (1, 3)
data_format = 'nhwc'
inputs_pass_through = [0]
output = rknn_lite.inference(inputs=[input_data], data_format=data_format, inputs_pass_through=inputs_pass_through)
print(output)
```

The log output from this run is what is shown in the screenshot at the top of this issue.

No matter how I change the values of `feature`, the output is always `[array([[1.1279297]], dtype=float32)]`.

I'm out of ideas! Any help would be appreciated!
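As a sanity check (a minimal sketch, assuming the same `advertising_model.tflite` and a float32 input of shape (1, 3)), the TFLite model can be run directly with TensorFlow's interpreter on the PC; if its output varies with the input, the constant output is being introduced at the RKNN conversion or runtime stage rather than in training:

```python
import numpy as np
import tensorflow as tf

# Run the pre-conversion TFLite model directly to see whether it already
# produces a constant output.
interpreter = tf.lite.Interpreter(model_path='advertising_model.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

for sample in ([24.1, 30.0, 51.0], [100.0, 5.0, 10.0]):
    x = np.array([sample], dtype=np.float32)
    interpreter.set_tensor(input_details[0]['index'], x)
    interpreter.invoke()
    print(sample, '->', interpreter.get_tensor(output_details[0]['index']))
```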

yuyun2000 commented 3 months ago

Could you first try connecting the board to rknn-toolkit2 and running inference through the toolkit's inference API to see what you get? You can refer to this code for connected-board inference: https://github.com/yuyun2000/rkan/blob/main/rk/infer.py
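For reference, a rough sketch of what connected-board inference with the toolkit looks like (continuing from the conversion script above; the linked example may differ in its details):

```python
import numpy as np
from rknn.api import RKNN

rknn = RKNN(verbose=True)
rknn.config(target_platform='rk3588')
rknn.load_tflite(model='advertising_model.tflite')
rknn.build(do_quantization=False)

# Run on the connected RK3588 (via rknn-server) instead of the PC simulator.
ret = rknn.init_runtime(target='rk3588')
if ret != 0:
    print('Init runtime on target failed!')
    exit(ret)

input_data = np.array([[24.1, 30.0, 51.0]], dtype=np.float32)
outputs = rknn.inference(inputs=[input_data])
print(outputs)

rknn.release()
```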

lk07828 commented 3 months ago

@yuyun2000 I tried running inference from the PC using rknn-toolkit2 connected to the RK3588. The result is as follows: [screenshot: inference result]

For any input, the inference result is always 1.09.

The rknn-server on the RK3588 printed the following messages: [screenshot: rknn-server log output]

yuyun2000 commented 3 months ago

I see that your version is far behind the latest release. How about updating to the latest version first?

lk07828 commented 3 months ago

OK, I'll try the new version.

happyme531 commented 3 months ago

[screenshot: the log message about mean/std handling, quoted from the issue description]

That is not an error. It means that when the input is 4-dimensional data with 1, 2, or 4 channels (i.e. ARGB-format data), the NPU can perform the mean-subtraction and std-division on the input itself, whereas in other cases this work has to be done by the CPU. The only consequence is that inference is slightly slower. This is not the root cause of your problem.
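In other words, for inputs that are not 1/2/4-channel 4D tensors, any mean/std normalization simply runs on the CPU. A small illustrative numpy sketch (the mean/std values are placeholders; this model was converted without any):

```python
import numpy as np

# Placeholder values: with no mean/std configured, this step is a no-op.
mean = np.array([0.0, 0.0, 0.0], dtype=np.float32)
std = np.array([1.0, 1.0, 1.0], dtype=np.float32)

feature = np.array([24.1, 30.0, 51.0], dtype=np.float32)
normalized = (feature - mean) / std   # done on the CPU for non-ARGB inputs
input_data = normalized.reshape(1, -1)
```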

lk07828 commented 2 months ago

Everyone, after upgrading to version 2.0 of rknn-toolkit2 and rknn-toolkit2-lite, both connected-board inference from the PC to the RK3588 and inference on the RK3588 itself now give the expected results. It was indeed a version issue.

yuyun2000 commented 2 months ago

See, that was it.