e2r-htz opened 4 years ago
Also hitting `KeyError: 'Resize'`
@e2r-htz, did you find any workaround? Do share if you find one.
Facing the same issue: `KeyError: 'min'`
I have come here to rescue you guys. This bug is caused by a conflict between the installed onnx version and the onnx opset torch uses to export. By inspecting the exported onnx file, you can find that the key here does not match the newest onnx ops converter. The correct dict is now:
```python
AVAILABLE_CONVERTERS = {
    'Conv': convert_conv,
    'ConvTranspose': convert_convtranspose,
    'Relu': convert_relu,
    'Elu': convert_elu,
    'LeakyRelu': convert_lrelu,
    'Sigmoid': convert_sigmoid,
    'Tanh': convert_tanh,
    'Selu': convert_selu,
    'Clip': convert_clip,
    'Exp': convert_exp,
    'Log': convert_log,
    'Softmax': convert_softmax,
    'PRelu': convert_prelu,
    'ReduceMax': convert_reduce_max,
    'ReduceSum': convert_reduce_sum,
    'ReduceMean': convert_reduce_mean,
    'Pow': convert_pow,
    'Slice': convert_slice,
    'Squeeze': convert_squeeze,
    'Expand': convert_expand,
    'Sqrt': convert_sqrt,
    'Split': convert_split,
    'Cast': convert_cast,
    'Floor': convert_floor,
    'Identity': convert_identity,
    'ArgMax': convert_argmax,
    'ReduceL2': convert_reduce_l2,
    'Max': convert_max,
    'Min': convert_min,
    'Mean': convert_mean,
    'Div': convert_elementwise_div,
    'Add': convert_elementwise_add,
    'Sum': convert_elementwise_add,
    'Mul': convert_elementwise_mul,
    'Sub': convert_elementwise_sub,
    'Gemm': convert_gemm,
    'MatMul': convert_gemm,
    'Transpose': convert_transpose,
    'Constant': convert_constant,
    'BatchNormalization': convert_batchnorm,
    'InstanceNormalization': convert_instancenorm,
    'Dropout': convert_dropout,
    'LRN': convert_lrn,
    'MaxPool': convert_maxpool,
    'AveragePool': convert_avgpool,
    'GlobalAveragePool': convert_global_avg_pool,
    'Shape': convert_shape,
    'Gather': convert_gather,
    'Unsqueeze': convert_unsqueeze,
    'Concat': convert_concat,
    'Reshape': convert_reshape,
    'Pad': convert_padding,
    'Flatten': convert_flatten,
    'Upsample': convert_upsample,
}
```
Therefore, you can edit the line here to the correct node_type from the dict (i.e. map the failing min/Resize keys back to Min/Upsample). This can easily be done by editing the source file of onnx2keras. You may also need to change the node_params, as Upsample expects the size param as scales. For details, please look here.
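If you would rather not edit the installed package, here is a minimal monkey-patch sketch of the same idea. The module paths are assumptions (recent versions of the repo appear to define the dict in `onnx2keras/layers.py` and `convert_upsample` in `onnx2keras/upsampling_layers.py`); check your installed version.

```python
# Hedged sketch: remap the newer ONNX op name onto a converter the package
# already ships, without editing its source. Module paths are assumptions --
# check where AVAILABLE_CONVERTERS is defined in your install.
from onnx2keras import layers as o2k_layers
from onnx2keras.upsampling_layers import convert_upsample

# Newer opsets export 'Resize' where older ones exported 'Upsample'.
o2k_layers.AVAILABLE_CONVERTERS['Resize'] = convert_upsample
```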
For the Clip operator, it seems the converter supports onnx operator set <= 6, where min and max are attributes. However, for onnx operator set >= 11, min and max are inputs, which causes the above `KeyError: 'min'`.
Thank you @dtlam26. I did as you said and added `'Resize': convert_upsample` to the dict. But I couldn't change its node_params. I get the following error:
`ValueError: The 'size' argument must be a tuple of 2 integers. Received: []`
Where do I set the size param? I thought for opset >= 11 it is taken automatically from the input image.
Update: I was able to convert to Keras and compile it on Edge TPU, but the Upsample `RESIZE_NEAREST_NEIGHBOR` was not mapped on the TPU (after TFLite conversion for running on Coral TPU), saying `Operation version not supported`. I know this has nothing to do with onnx2keras, but it would be great if someone has already had any luck with the same.
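For reference, a hedged sketch of the full-integer TFLite conversion step mentioned above, which is what the Edge TPU compiler consumes. `keras_model` and `representative_data` are placeholders for your converted model and a calibration-data generator, not code from this thread.

```python
# Hedged sketch of post-training full-integer quantization for the Edge TPU.
# `keras_model` and `representative_data` are placeholders.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data  # yields [input_batch]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```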
Sorry to inform you of this, but for now the Upsampling layer can't be fully converted for the Coral TPU, because of the risk of losing precision when this layer goes to full int. You can only force your way around it by setting up your own quantization through quantization-aware training.
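A minimal sketch of that quantization-aware-training suggestion, assuming the TensorFlow Model Optimization toolkit is installed; `keras_model` is a placeholder for the model produced by onnx2keras, not code from this thread.

```python
# Hedged sketch of quantization-aware training with tfmot, to be fine-tuned
# before TFLite conversion. `keras_model` is a placeholder.
import tensorflow_model_optimization as tfmot

q_aware_model = tfmot.quantization.keras.quantize_model(keras_model)
q_aware_model.compile(optimizer='adam', loss='mse')
# Fine-tune on representative data, then convert with the TFLite converter
# using full-integer quantization before running the Edge TPU compiler.
```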
@e2r-htz

> Update: I was able to convert to Keras and compile it on Edge TPU, but the Upsample `RESIZE_NEAREST_NEIGHBOR` was not mapped on the TPU (after TFLite conversion for running on Coral TPU), saying `Operation version not supported`.

Can you please share how you were able to solve this?
@dtlam26 When I converted the onnx model to the keras model, I did as you said and added `'Resize': convert_upsample` to the dict, but I get `ValueError: The 'size' argument must be a tuple of 2 integers. Received: []`. How can I resolve it?
As I said, depending on the version of your keras, the `size` argument may be exchanged with the `scale` argument in this line. For your sake, you can print out all the params keys and map them correctly yourself. I know this is tricky, but that is a way to overcome it.
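A hedged sketch of that print-and-inspect idea: wrap the converter so it dumps whatever it actually receives. The `onnx2keras.upsampling_layers` import path is an assumption; adjust it to your install.

```python
# Hedged sketch: wrap convert_upsample to see whether your version passes
# 'size', 'scales', or neither. The import path is an assumption.
from onnx2keras.upsampling_layers import convert_upsample

def debug_convert_upsample(node, params, layers, lambda_func, node_name, keras_name):
    print('Upsample params keys:', list(params.keys()))
    print('Upsample node inputs:', list(node.input))
    convert_upsample(node, params, layers, lambda_func, node_name, keras_name)

# Register debug_convert_upsample in AVAILABLE_CONVERTERS in place of the
# original entry while debugging.
```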
Thanks for your reply, but exchanging `size` with `scale` did not solve the issue!
@bominn, can you explain where I can make the changes you suggest to avoid `KeyError: 'min'`?
I was able to solve the problem. The `convert_clip()` expects a key `min` inside `params`. But in current ONNX, the min and max values are passed as inputs (not as attributes). So we can add `params["min"]` and `params["max"]` before they are used.
1. Open the `operation_layers.py` file. (It may be located at `.../envs/.../lib/python3.9/site-packages/onnx2keras/operation_layers.py`, or use the VS Code navigator.)
2. In the `convert_clip()` method, add the following lines at the **beginning of the convert_clip() method**:
```python
def convert_clip(node, params, layers, lambda_func, node_name, keras_name):
    if len(node.input) == 3:
        # Opset >= 11 passes min/max as the 2nd and 3rd inputs; copy them
        # back into params where the rest of the function expects them.
        # Note: astype(int) truncates float bounds (e.g. 6.0 -> 6); drop the
        # cast if your model clips to non-integer values.
        params["min"] = ensure_numpy_type(layers[node.input[1]]).astype(int)
        params["max"] = ensure_numpy_type(layers[node.input[2]]).astype(int)
    else:
        # You can raise an Exception here to make sure the above assignments
        # always happen.
        pass
```
For people with the resize problem, changing this line from `scale = np.uint8(layers[node.input[1]][-2:])` to `scale = np.uint8(layers[node.input[-1]][-2:])` solved it for me. (Presumably because Resize in opset 11 takes inputs `(X, roi, scales)`, so the scales tensor is the last input rather than the second, as it was for Upsample.)
Generally speaking, here is how you might solve this problem for other operators: visualise your onnx model in Netron to get the node number of the params that the layer needs, then step into the code with PyCharm or another debugger to see how you can use this information. I'm not an expert on onnx, but apparently parameters for certain layers are also stored as nodes, as in the resize case for the parameters describing how much the layer is supposed to upscale.
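A hedged sketch of that inspection, using the official onnx package rather than Netron; `"model.onnx"` is a placeholder path.

```python
# Hedged sketch: list each node's op type, inputs, and attributes to see
# where a parameter such as min/max or scales actually lives in your graph.
import onnx

model = onnx.load("model.onnx")  # placeholder path
for node in model.graph.node:
    attrs = [a.name for a in node.attribute]
    print(f"{node.op_type}: inputs={list(node.input)} attributes={attrs}")
```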
Thank you @uzzal-podder, your solution worked on my side. As you said, the error comes from the fact that `convert_clip()` expects `min` and `max` as attributes, but receives them as inputs. This is visible in the following figures produced with netron.app. I exported a torch model to onnx with `torch.onnx.export`. On the left-hand side it uses `opset_version=7` and on the right-hand side `opset_version=11`.
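A hedged sketch reproducing that comparison with a toy module whose `torch.clamp` exports as an ONNX Clip node; `TinyClip` is hypothetical, and depending on your torch version, opset 7 may no longer be exportable.

```python
# Hedged sketch, not the commenter's model: export the same clamp at two
# opsets and inspect both files in Netron to watch min/max move from Clip
# attributes (opset 7) to inputs (opset 11).
import torch

class TinyClip(torch.nn.Module):  # hypothetical toy model
    def forward(self, x):
        return torch.clamp(x, 0.0, 6.0)

model = TinyClip().eval()
dummy = torch.randn(1, 3, 8, 8)

torch.onnx.export(model, dummy, "clip_opset7.onnx", opset_version=7)
torch.onnx.export(model, dummy, "clip_opset11.onnx", opset_version=11)
```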