NepalLizarazu opened this issue 1 year ago
Hi @NepalLizarazu, thanks for reporting this issue to us. We will take a look into it.
Hi @meenchen @NepalLizarazu, have you solved this problem? I'm having the same error. If you've found a solution, could you share it with me? I'd really appreciate it.
Hi @meenchen, I have experienced the same issue, and I have now spent several hours trying to solve it. Unfortunately I have run into a dead end which I don't think I can get past myself.
I found out that the NotImplementedError from @NepalLizarazu's screenshot is raised when the parser encounters the divide operator. In my case, the error is raised from TTEParser.py because _findMultiplyAbsMaxDivide() returns False, {}. The reason it returns False is that no if-statement handles the case where one of the two operators passed to the function is a divide. I fixed it by adding some code for handling the divide case:
multiply_op = None  # make sure both names exist; otherwise the checks further down
divide_op = None    # raise a NameError on whichever path does not set them
if len(ops) == 2:
    if ops[0]["type"] == "abs" and ops[1]["type"] in ["multiply", "cast"]:
        abs_op = ops[0]
        # cast1
        if ops[1]["type"] == "cast":
            cast_1 = ops[1]
            multiply_op = _findNextOpTakeInputName(model, cast_1["outputs"][0]["name"])
        else:
            multiply_op = ops[1]
    ################## I ADDED THIS CODE ##################
    elif ops[0]["type"] == "abs" and ops[1]["type"] in ["divide"]:
        abs_op = ops[0]
        # cast1
        if ops[1]["type"] == "cast":
            cast_1 = ops[1]
            divide_op = _findNextOpTakeInputName(model, cast_1["outputs"][0]["name"])
        else:
            divide_op = ops[1]
    ################## I ADDED THIS CODE ##################
    else:
        abs_op = ops[1]
        if ops[0]["type"] == "cast":
            cast_1 = ops[0]
            multiply_op = _findNextOpTakeInputName(model, cast_1["outputs"][0]["name"])
        else:
            multiply_op = ops[0]
else:
    return False, {}
if abs_op["type"] == "abs":
    # max
    max_op = _findNextOpTakeInputName(model, abs_op["outputs"][0]["name"])
    if not max_op or max_op["type"] != "max":
        return False, {}
    ################## I ADDED THIS CHECK FOR DIVIDE_OP ##################
    if not divide_op:
    ################## I ADDED THIS CHECK FOR DIVIDE_OP ##################
        next_of_max = _findNextOpTakeInputName(model, max_op["outputs"][0]["name"])
        # -> (cast2) -> divide or divide
        if next_of_max["type"] == "cast":
            cast_2 = next_of_max
            divide_op = _findNextOpTakeInputName(model, cast_2["outputs"][0]["name"])
        else:
            divide_op = _findNextOpTakeInputName(model, max_op["outputs"][0]["name"])
    if not divide_op or divide_op["type"] != "divide":
        return False, {}
    # -> cast
    cast_op = _findNextOpTakeInputName(model, divide_op["outputs"][0]["name"])
    if not cast_op or cast_op["type"] != "cast":
        return False, {}
    ################## I EXTENDED THE CHECK ##################
    if (not multiply_op or multiply_op["type"] != "multiply") and (not divide_op or divide_op["type"] != "divide"):
    ################## I EXTENDED THE CHECK ##################
        return False, {}
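For anyone trying to follow the matching logic, here is a minimal standalone sketch of the abs -> max -> divide -> cast chain that the patched code has to recognize. The op dicts and the find_next helper are my own mock stand-ins (they only mimic the "type"/"inputs"/"outputs" fields the parser reads, and assume _findNextOpTakeInputName simply returns the op that consumes a given tensor name), not the real TinyEngine IR objects:

# Mock op list: abs -> max -> divide -> cast, each op linked by tensor names.
ops = [
    {"type": "abs",    "inputs": [{"name": "x"}],                      "outputs": [{"name": "abs_out"}]},
    {"type": "max",    "inputs": [{"name": "abs_out"}],                "outputs": [{"name": "max_out"}]},
    {"type": "divide", "inputs": [{"name": "x"}, {"name": "max_out"}], "outputs": [{"name": "div_out"}]},
    {"type": "cast",   "inputs": [{"name": "div_out"}],                "outputs": [{"name": "cast_out"}]},
]

def find_next(ops, tensor_name):
    # hypothetical stand-in for _findNextOpTakeInputName:
    # return the first op that consumes the given tensor
    for op in ops:
        if any(inp["name"] == tensor_name for inp in op["inputs"]):
            return op
    return None

abs_op = ops[0]
max_op = find_next(ops, abs_op["outputs"][0]["name"])
divide_op = find_next(ops, max_op["outputs"][0]["name"])
cast_op = find_next(ops, divide_op["outputs"][0]["name"])
print(max_op["type"], divide_op["type"], cast_op["type"])  # max divide cast

This only illustrates the traversal; the real parser additionally handles the optional cast ops and the multiply variant.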
Note that I applied the same changes to _findTransposeMultiplyAbsMaxDivide(). I also had to modify the _removeLayers function to avoid a ValueError:
def _removeLayers(layers, target_dict):
    for k in target_dict:
        ################## I ADDED THIS CHECK ##################
        if target_dict[k] in layers:
        ################## I ADDED THIS CHECK ##################
            layers.remove(target_dict[k])
    return layers
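For context on why the check is needed: list.remove raises a ValueError when the element is not present, and apparently some entries in target_dict refer to layers that are not (or no longer) in layers. A tiny standalone illustration with made-up layer names:

layers = ["conv1", "add1", "conv2"]
target_dict = {"skip1": "add1", "skip2": "relu5"}  # "relu5" is not in layers

for k in target_dict:
    # without the membership check, layers.remove("relu5") would raise
    # ValueError: list.remove(x): x not in list
    if target_dict[k] in layers:
        layers.remove(target_dict[k])

print(layers)  # ['conv1', 'conv2']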
Whether this is a proper fix, I don't know, but it did make the TTEParser.py script finish, although another issue then appears.
I now encounter a NotImplementedError from tinyengine/code_generator/operators/transpose_conv2d.py, line 269. I found out that the reason for this error is that some of the params values are incorrect:
elif params["group"] == 1 and not tflite_op:
# function name
if (
params["kernel_h"] == 1
and params["kernel_w"] == 1
and params["input_h"] == 1
and params["input_w"] == 1
and params["output_h"] == 1
and params["output_w"] == 1
and params["input_c"] / 10 == 1 ******* NOT TRUE *******
and params["output_c"] % 8 == 0
):
if params["input_dtype"] == "int8":
function_name = "pointwise_conv_1row10col_10inputdepth"
else:
function_name = "pointwise_conv_fp_1row10col_10inputdepth"
elif (
params["kernel_h"] == 1
and params["kernel_w"] == 1
and params["input_h"] * params["input_w"] >= 4 ******* NOT TRUE *******
and params["output_c"] % 4 == 0
):
if params["input_dtype"] == "int8":
function_name = "pointwise_conv_4row4col"
else:
function_name = "pointwise_conv_fp_4row4col"
else:
raise NotImplementedError
More specifically, input_c is 2, and input_h and input_w are both 1, hence neither params["input_c"] / 10 == 1 nor params["input_h"] * params["input_w"] >= 4 evaluates to True.
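To make this concrete, here is a small standalone check using the values from my run (output_c is a made-up example; the failing terms do not depend on it). It just re-evaluates the two branch conditions from transpose_conv2d.py on a plain dict, and both come out False, which is why the code falls through to raise NotImplementedError:

params = {
    "kernel_h": 1, "kernel_w": 1,
    "input_h": 1, "input_w": 1,
    "output_h": 1, "output_w": 1,
    "input_c": 2,   # value reported in my run
    "output_c": 8,  # example value; never reached because the earlier terms already fail
}

cond_10inputdepth = (
    params["kernel_h"] == 1 and params["kernel_w"] == 1
    and params["input_h"] == 1 and params["input_w"] == 1
    and params["output_h"] == 1 and params["output_w"] == 1
    and params["input_c"] / 10 == 1                    # 2 / 10 == 1 -> False
    and params["output_c"] % 8 == 0
)
cond_4row4col = (
    params["kernel_h"] == 1 and params["kernel_w"] == 1
    and params["input_h"] * params["input_w"] >= 4     # 1 * 1 >= 4 -> False
    and params["output_c"] % 4 == 0
)
print(cond_10inputdepth, cond_4row4col)  # False False -> falls through to NotImplementedError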
I don't know if the error is caused by my previous fix, but I hope you can help me.
Best regards,
Marco
Hi @meenchen, thanks for your great work. As I put in the title, I got the following error (see image).
I started by following the compilation/README instructions in the tiny-training repo, i.e. running mcu_ir_gen.py with the mcunet-5fps.pkl file.
Then I ran ir2json.py, selecting the sparse_bp-49kb-1x3x128x128.ir file that had just been generated.
Then I took the scale.json from the corresponding ir_zoos folder (generated with mcu_ir_gen), plus the graph.json and params.pkl from .model/testproj/ (generated with ir2json), and put all three in the assets folder.
Finally I ran: python examples/tiny_training.py -f assets/sparse_bp-49kb-1x3x128x128-graph.json -D assets/sparse_bp-49kb-1x3x128x128-params.pkl -QAS assets/scale.json -m -g -d -FR
I made sure it was using the same 49kb sparse update scheme, and I got that NotImplementedError. Is there something that I might have missed during the process?
P.S. It seems to originate from an "abs" operation that could not be handled, but since these are the examples provided in tiny-training, I think I must have missed something at some point. Any thoughts on it?