kendryte / nncase

Open deep learning compiler stack for Kendryte AI accelerators ✨
Apache License 2.0

Can't deploy kmodel model on K210 (firmware 0.6.2, nncase 1.1.0) #462

Closed: narduzzi closed this issue 2 years ago

narduzzi commented 2 years ago

Describe the bug I am trying to deploy a custom model on a Maix Dock I running firmware 0.6.2. The board crashes when I try to load the model.

To Reproduce

The model I am trying to deploy is defined as follows:

# create network
import tensorflow as tf

inputs = tf.keras.layers.Input((28,28,1))
x = tf.keras.layers.Conv2D(32,5,activation="relu")(inputs)
f = tf.keras.layers.Flatten()(x)
out = tf.keras.layers.Dense(10, activation="softmax")(f)
model = tf.keras.models.Model(inputs=inputs, outputs=out)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="categorical_crossentropy")
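For reference, the layer shapes and parameter counts of this model can be worked out by hand (a quick sketch, assuming the Keras defaults of stride 1 and 'valid' padding), which is useful when judging whether a model is a plausible fit for the K210's limited memory:

```python
# Hand-computed shapes/parameters for the model above
# (assumes Keras defaults: stride 1, 'valid' padding).
conv_out = 28 - 5 + 1                 # spatial size after a 5x5 valid conv: 24
conv_params = (5 * 5 * 1 + 1) * 32    # 32 filters of 5x5x1 weights plus one bias each
flat = conv_out * conv_out * 32       # features after Flatten
dense_params = (flat + 1) * 10        # Dense weights plus biases
total = conv_params + dense_params
print(conv_out, flat, total)          # 24 18432 185162
```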

Then I convert it to ONNX using tf2onnx:

import tf2onnx
import onnxruntime as rt

spec = (tf.TensorSpec((None, 28, 28, 1), tf.float32, name="input"),)
output_path = model.name + ".onnx"

model_proto, _ = tf2onnx.convert.from_keras(model, input_signature=spec, opset=13, output_path=output_path)

Finally, I convert it to a kmodel following the tutorial in the docs:

import nncase  # needed below for CompileOptions, Compiler, etc.

model_file = 'model.onnx'
target = 'k210'

# onnx simplify from documentation
model_file = onnx_simplify(model_file)

# compile_options
compile_options = nncase.CompileOptions()
compile_options.target = target
compile_options.dump_ir = True
compile_options.dump_asm = True
compile_options.dump_dir = 'tmp'
# quantize model
# compile_options.quant_type = 'uint8' # or 'int8'
# compiler
compiler = nncase.Compiler(compile_options)

# import_options
import_options = nncase.ImportOptions()

# import
model_content = read_model_file(model_file)
compiler.import_onnx(model_content, import_options)

# compile
compiler.compile()

# kmodel
kmodel = compiler.gencode_tobytes()
with open('test.kmodel', 'wb') as f:
    f.write(kmodel)
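The script above relies on two helpers, read_model_file and onnx_simplify, which come from the nncase usage docs and are not shown here. A minimal sketch of what they do (the onnx_simplify version assumes the onnx and onnxsim packages, as used in the docs):

```python
def read_model_file(model_file):
    # Return the raw bytes of a model file for compiler.import_onnx().
    with open(model_file, 'rb') as f:
        return f.read()

def onnx_simplify(model_file):
    # Sketch of the docs' simplify step (assumes the onnx and onnxsim packages).
    import onnx
    from onnxsim import simplify

    model = onnx.load(model_file)
    simplified, ok = simplify(model)
    assert ok, "onnx-simplifier could not validate the simplified model"
    out_file = model_file[:-len('.onnx')] + '_simplified.onnx'
    onnx.save(simplified, out_file)
    return out_file
```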

I then upload the model to /flash/ of the Maix Dock using rshell. However, when I try to load it in MicroPython:

...
kpu.load("/flash/test.kmodel")

the board crashes without any error or debug message.

What could be the source of the problem?

Expected behavior

The model should load into the KPU and allow inference, or an error message should be displayed.

Origin model and code

This zip contains the different models generated by the code. models.zip

Environment (please complete the following information):

- Board: Maix Dock I
- Firmware: 0.6.2
- nncase: 1.1.0

sunnycase commented 2 years ago

Please follow the deploy section of the usage document: https://github.com/kendryte/nncase/blob/master/docs/USAGE_EN.md#k210. If you are using MaixPy instead of kendryte-standalone-sdk, you should open an issue in the MaixPy repository.