majianjia / nnom

A higher-level Neural Network library for microcontrollers.
Apache License 2.0
815 stars 235 forks

local variable 'bias_shift' referenced before assignment #178

Open ken4647 opened 1 year ago

ken4647 commented 1 year ago
Traceback (most recent call last):
    generate_model(model=loaded,x_test=representative_dataset_gen(),name="weights/nnom_weight.h")
  File "C:\Users\Fake Bug\Desktop\modeltransfer\nn_scripts\nnom.py", line 750, in generate_model
    quantize_weights(model, per_channel_quant=per_channel_quant, name=name, format=format, layer_q_list=layer_q_list)
  File "C:\Users\Fake Bug\Desktop\modeltransfer\nn_scripts\nnom.py", line 733, in quantize_weights
    f.write('#define ' + layer.name.upper() + '_BIAS_LSHIFT '+to_cstyle(bias_shift) +'\n\n')
UnboundLocalError: local variable 'bias_shift' referenced before assignment
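The error pattern can be reproduced in isolation. This is an illustrative sketch, not NNoM's actual code: `bias_shift` is assigned only inside a conditional branch, so for a layer name the tool does not recognize, the later read raises `UnboundLocalError`.

```python
# Minimal reproduction of the failure mode (illustrative sketch, not
# NNoM's actual code): bias_shift is assigned only when the layer name
# matches a known pattern, so an unrecognized name like "LAYER_0"
# reaches the read with bias_shift unbound.
def write_shift(layer_name):
    if 'conv' in layer_name or 'dense' in layer_name:
        bias_shift = 3  # assigned only for recognized layer names
    return '#define %s_BIAS_LSHIFT %d' % (layer_name.upper(), bias_shift)

print(write_shift('dense_1'))   # branch taken, bias_shift exists
try:
    write_shift('LAYER_0')      # branch skipped -> UnboundLocalError
except UnboundLocalError as e:
    print('UnboundLocalError:', e)
```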

This is the problem I encountered when I tried to convert an h5 model (itself converted from an ONNX model) to weights.h. The cause seems to be here:

def is_shift_layer(layer):
    '''layer which can change the output encoding'''
    # FIXME: add more which will change the output shift
    if ('input' in layer.name or
        'conv2d' in layer.name or
        'conv1d' in layer.name or
        'dense' in layer.name or
        'softmax' in layer.name or
        'sigmoid' in layer.name or
        'tanh' in layer.name or
        ('add' in layer.name and 'zero' not in layer.name) or  # the name zero_padding contains 'add'
        'subtract' in layer.name or
        'multiply' in layer.name or
        ('activation' in layer.name and layer.get_config()['activation'] == 'softmax') or
        ('activation' in layer.name and layer.get_config()['activation'] == 'hard_sigmoid') or
        ('activation' in layer.name and layer.get_config()['activation'] == 'tanh') or
        ('activation' in layer.name and layer.get_config()['activation'] == 'hard_tanh') or
        is_rnn_layer(layer)
    ):
        return True
    return False

Meanwhile, my layers are named like LAYER_0. Is that acceptable? As a beginner, my question is: why is the layer judged by its name attribute instead of its type()? And is there a good way to fix this?
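One type-based alternative could look like the sketch below. This is not NNoM's implementation, just an illustration of matching on the layer's class rather than its name string, which survives renaming; the stand-in classes at the bottom replace the real Keras classes so the sketch runs without TensorFlow installed.

```python
# Hedged sketch: decide "shift layer" by class name instead of the
# layer.name string. A renamed layer (e.g. "LAYER_0") is still matched,
# and the 'add' vs 'zero_padding' name collision disappears entirely.
SHIFT_LAYER_TYPES = {'InputLayer', 'Conv1D', 'Conv2D', 'Dense',
                     'Softmax', 'Add', 'Subtract', 'Multiply'}

def is_shift_layer_by_type(layer):
    """Return True if the layer's class changes the output Q-format."""
    return type(layer).__name__ in SHIFT_LAYER_TYPES

# Stand-in classes (real code would receive tf.keras layer instances):
class Dense: pass
class ZeroPadding2D: pass

print(is_shift_layer_by_type(Dense()))         # True, regardless of name
print(is_shift_layer_by_type(ZeroPadding2D())) # False
```

A full version would also need the activation-config checks and the RNN case from the original function, but those could be keyed on class as well.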

ken4647 commented 1 year ago

Here is my h5 model: [image]

majianjia commented 1 year ago

Please check whether you have enabled bias for the conv layers. Conv layers must have a bias for successful conversion; it is a requirement of the backend.
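A quick pre-flight check along these lines could catch the problem before the converter fails. This is a hedged sketch, not part of NNoM; it only assumes the standard Keras `get_config()['use_bias']` key, and the stubbed usage below stands in for a real loaded model.

```python
# Hedged sketch: list layers that explicitly disable their bias, since
# NNoM's backend requires conv/dense layers to carry one.
# 'use_bias' is the standard Keras layer-config key.
def layers_missing_bias(model):
    """Return names of layers whose config sets use_bias=False."""
    return [layer.name for layer in model.layers
            if layer.get_config().get('use_bias') is False]
```

Run it on the loaded h5 model before calling generate_model; an empty list means every bias-capable layer has its bias enabled.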

ken4647 commented 1 year ago

Thanks for your reply! I have confirmed that all biases in the model are enabled. I tried to work around this problem by changing the layer names, and after struggling for days I successfully converted my model to NNoM's format. However, the result is not correct: the ONNX model is in NCHW format while h5 models default to NHWC, which causes my model to produce wrong results. [image] I also want to ask whether there is any other way to convert models in other formats, like PyTorch/TFLite/ONNX/SavedModel. Or is there a good way to convert an ONNX model to an h5 or NNoM model? I have used onnx2keras, which stopped being updated long ago. Keras is simple, of course, but re-training all my models would be really tough. Thanks for your reply!

ken4647 commented 1 year ago

As Keras uses the NHWC format, NNoM seems to apply conv2d, padding, and maxpool directly to the first two spatial dimensions, so with NCHW data the computation becomes completely wrong.
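The layout mismatch can be illustrated with plain NumPy transposes. The shapes here are made-up examples; the axis orders (ONNX NCHW/OIHW vs. Keras NHWC/HWIO) are the standard conventions of each framework.

```python
import numpy as np

# Illustration of the layout mismatch: ONNX tensors are NCHW
# (batch, channels, height, width) while Keras/NNoM default to NHWC.
# Feeding NCHW data to an NHWC pipeline convolves over the wrong axes;
# the data must be transposed, and conv weights permuted too
# (ONNX OIHW -> Keras HWIO).
x_nchw = np.zeros((1, 3, 32, 32))            # ONNX-style input tensor
x_nhwc = np.transpose(x_nchw, (0, 2, 3, 1))  # -> (1, 32, 32, 3)

w_oihw = np.zeros((16, 3, 5, 5))             # ONNX conv weight: O, I, H, W
w_hwio = np.transpose(w_oihw, (2, 3, 1, 0))  # -> (5, 5, 3, 16), Keras layout
print(x_nhwc.shape, w_hwio.shape)            # (1, 32, 32, 3) (5, 5, 3, 16)
```

Tools that convert ONNX to Keras have to perform exactly these permutations on every tensor, which is why a naive conversion that keeps the original axis order produces wrong numbers rather than an error.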

majianjia commented 1 year ago

I have only tested it with Keras/TF2; ONNX models are not tested. It looks like the data format is the issue.