onnx / onnx-tensorflow

Tensorflow Backend for ONNX

Converting Resize node fails with KeyWord="" Error #1053

Open Pensarfeo opened 1 year ago

Pensarfeo commented 1 year ago

Describe the bug

In handlers/backend/resize.py, the version_11 handler unconditionally looks up the roi tensor, but (I think) that tensor only exists for a "tf_crop_and_resize" operation. For an ordinary resize, roi is not passed to the Resize node, so the corresponding input name coming from the ONNX graph is the empty string "".

Converting to TensorFlow therefore fails when converting the Resize nodes and gives a wrong-keyword error.

Solution

The solution is to move the following lines

      roi = tensor_dict[node.inputs[1]]
      roi_dtype = roi.dtype

from the top of the version_11 function to inside the if condition

    if coordinate_transformation_mode == "tf_crop_and_resize":
      roi = tensor_dict[node.inputs[1]]
      roi_dtype = roi.dtype

This leaves the rest of the code unchanged and fixes the bug.
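For illustration, here is a minimal, self-contained sketch of the guarded lookup the fix amounts to. The helper name resolve_roi and its arguments are hypothetical and do not reproduce the actual onnx-tensorflow handler code; only the placement of the tensor_dict lookup mirrors the proposed change.

    # Hypothetical illustration of the proposed guard: the roi lookup only
    # happens for "tf_crop_and_resize", so an empty-string input name (an
    # omitted optional ONNX input) is never used as a tensor_dict key.
    def resolve_roi(node_inputs, attrs, tensor_dict):
        mode = attrs.get("coordinate_transformation_mode", "half_pixel")
        roi = None
        if mode == "tf_crop_and_resize":
            # Only in this branch is node_inputs[1] guaranteed to name a tensor.
            roi = tensor_dict[node_inputs[1]]
        return roi

    # A plain resize: the roi slot was left empty, so its input name is "".
    print(resolve_roi(["X", "", "scales"],
                      {"coordinate_transformation_mode": "asymmetric"},
                      {"X": object(), "scales": object()}))  # -> None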

To Reproduce

ONNX model file

Python, ONNX, ONNX-TF, Tensorflow version

PINTO0309 commented 1 year ago

The Resize input values and attributes changed significantly in ONNX between opset <= 10 and opset >= 11, so some opsets may convert correctly and others may not. I think your point is correct.

https://github.com/onnx/onnx/blob/main/docs/Changelog.md#resize-10

https://github.com/onnx/onnx/blob/main/docs/Changelog.md#resize-11
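To make the difference concrete: Resize-10 takes only (X, scales), while Resize-11 adds the roi and sizes inputs plus the coordinate_transformation_mode attribute, and exporters commonly fill the unused roi slot with the empty string "", which is the ONNX convention for a skipped input. A quick way to see that, assuming the onnx Python package is installed:

    from onnx import helper

    # Build a Resize-11-style node with the roi slot left empty. The empty
    # string "" is exactly what the version_11 handler then tries to use as
    # a tensor_dict key.
    node = helper.make_node(
        "Resize",
        inputs=["X", "", "scales"],  # roi left empty
        outputs=["Y"],
        mode="nearest",
        coordinate_transformation_mode="asymmetric",
    )
    print(list(node.input))  # ['X', '', 'scales']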

Also, unfortunately, onnx-tensorflow appears to have almost stopped being maintained.

Since you have not provided a link to the sample ONNX file in question, I can't try it, but you can check whether it converts without error using the tool I am developing: https://github.com/PINTO0309/onnx2tf
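For reference, a hedged sketch of driving onnx2tf from Python (pip install onnx2tf). The argument names input_onnx_file_path and output_folder_path follow my reading of the onnx2tf README and may differ between versions, and yolov8n.onnx is just the example export file discussed below.

    import onnx2tf

    # Convert an exported ONNX file to a TensorFlow SavedModel. Argument
    # names are taken from the onnx2tf README and may vary by version; the
    # rough CLI equivalent is `onnx2tf -i yolov8n.onnx`.
    onnx2tf.convert(
        input_onnx_file_path="yolov8n.onnx",
        output_folder_path="saved_model",
    )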

Pensarfeo commented 1 year ago

@PINTO0309 Hi, thanks for the answer. If you want to try it, you can export the YOLOv8 model to ONNX as explained here: https://github.com/ultralytics/ultralytics#python

This is the code:

from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n.yaml")  # build a new model from scratch
model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

# Use the model
results = model.train(data="coco128.yaml", epochs=3)  # train the model
results = model.val()  # evaluate model performance on the validation set
results = model("https://ultralytics.com/images/bus.jpg")  # predict on an image
success = model.export(format="onnx")  # export the model
PINTO0309 commented 1 year ago

The conversion has already been successful.

Pensarfeo commented 1 year ago

@PINTO0309 With this repo or onnx2tf?

PINTO0309 commented 1 year ago

onnx2tf

Pensarfeo commented 1 year ago

@PINTO0309 Thanks!!!! I tried it too, and it works faster than before! Btw, any tips to optimize the conversion? Anything else I can do to speed up the process?

PINTO0309 commented 1 year ago

I have already implemented all the optimization tricks that most engineers probably do not practice. You are probably already in a position to benefit from them.