Closed: B-Willems closed this issue 2 years ago
Looks like the last layer in your model is called `dense_3`, which we need to tell the Tensil compiler about, since it assumes the last output is called `Identity`, which is usually the case with converted models. Can you try running this compile command instead?

```
tensil compile -a /demo/arch/pynqz2.tarch -m model.onnx -o "dense_3" -s true
```
You can use the Netron viewer to see what the output node is called.
By the way, it looks like you have one sigmoid activation right at the end there which we currently don't support unless you use the combined tflite/tensil approach (https://www.tensil.ai/docs/tutorials/yolo-ultra96v2/). Could you try switching that to ReLU too?
```python
output = Dense(1, activation="relu")(dense)
```
This did in fact solve the issue, thank you.
However, when executing the compile command now, I get an error that MatMul layers are not supported. Is this an issue that will be solved by changing the activation to ReLU instead of sigmoid, or is another fix needed to make this model work?
That's a separate thing: MatMul layers are supported, but sometimes the ONNX conversion does an odd renaming. @petrohi, do you recall what the fix was for this?
The Tensil ONNX compiler does not support `MatMul`. It supports `Gemm`, which is a combination of `MatMul` and `Add` (matrix multiply with bias). To make the TF-to-ONNX converter produce `Gemm` you need to freeze the dimensions by specifying the input shape. For your model it looks like you'd need to add the `--inputs X_img:0[1,75,75,4]` argument. We also recommend using the ONNX opset 11 (`--opset 11`).
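Putting those flags together with the conversion command used earlier in this thread, the full invocation would look roughly like this. This is a sketch, not verified against your model: the input name `X_img:0`, the shape `[1,75,75,4]`, and the saved-model directory `my_modeltwo` all come from this thread and may need adjusting:

```shell
# Sketch: tf2onnx conversion with a frozen input shape so MatMul + Add
# can be fused into Gemm, and with opset 11 as recommended above.
python -m tf2onnx.convert \
  --saved-model "my_modeltwo" \
  --output "model.onnx" \
  --inputs "X_img:0[1,75,75,4]" \
  --opset 11
```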
The `--inputs X_img:0[1,75,75,4]` argument does not seem to work. It throws an error:
`AssertionError: X_img is not in graph`
Error log listed below (sorry for the image, but the code block didn't format right):
Perhaps this question is better asked on the tf2onnx GitHub page, though? But it seems you have lots of experience with this sort of thing.
Please check what the name of the input node is in your TF model (in Netron) and try using it instead of `X_img`.
Alright, in Netron the name of the input node was also X_img. However, when I remove the `name="X_img"` argument from the input layer and execute the following command, it exports correctly, with `Gemm` layers instead of `MatMul` and `Add` layers. Thank you very much for the support. I'll close this issue since the problem is solved.
Hi, I am facing this problem when trying to compile my model with Tensil. Could anyone help me, please!
I made a TensorFlow model and converted it to a .onnx file using tf2onnx:

```
!python -m tf2onnx.convert --saved-model "my_modeltwo" --output "model.onnx"
```
When trying to generate the .tdata, .tprog and .tmodel files, only the first two are generated. When executing the command I get the following exception: ![image](https://user-images.githubusercontent.com/15821611/165518034-7f15b0a7-c958-4668-9834-ae53f6a1bc37.png)
My TensorFlow network:

```python
def create_model(optimizer):
    input_img = Input(shape=(75, 75, 4), name="X_img")
    # (middle of the model definition not captured in the original post)

model = create_model(optimizer=Adam())
model.summary()
```
My onnx model: https://drive.google.com/file/d/1-lVduUVw8qdOREVN3mjex2fGQziUVPL4/view?usp=sharing