ultralytics / ultralytics

NEW - YOLOv8 πŸš€ in PyTorch > ONNX > OpenVINO > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Error exporting the default YOLOv8n model to EdgeTPU format (onnx2tf issue, I think) #14235

Open ARusDian opened 4 days ago

ARusDian commented 4 days ago

Search before asking

YOLOv8 Component

Export

Bug

I hit a bug, but I think it comes from the ONNX-to-TFLite conversion. I'm exporting on a Raspberry Pi 4 so the model can run on a Coral USB Accelerator.

The error message says this:

raise TypeError(
TypeError: You are passing KerasTensor(type_spec=TensorSpec(shape=(1, 20, 20, 256), dtype=tf.float32, name=None), name='tf.math.multiply_119/Mul:0', description="created by layer 'tf.math.multiply_119'"), an intermediate Keras symbolic input/output, to a TF API that does not allow registering custom dispatchers, such as `tf.cond`, `tf.function`, gradient tapes, or `tf.map_fn`. Keras Functional model construction only supports TF API calls that *do* support dispatching, such as `tf.math.add` or `tf.reshape`. Other APIs cannot be called directly on symbolic Kerasinputs/outputs. You can work around this limitation by putting the operation in a custom Keras layer `call` and calling that layer on this symbolic input/output.

ERROR: input_onnx_file_path: yolov8n.onnx
ERROR: onnx_op_name: wa/model.10/Resize
ERROR: Read this and deal with it. https://github.com/PINTO0309/onnx2tf#parameter-replacement
ERROR: Alternatively, if the input OP has a dynamic dimension, use the -b or -ois option to rewrite it to a static shape and try again.
ERROR: If the input OP of ONNX before conversion is NHWC or an irregular channel arrangement other than NCHW, use the -kt or -kat option.
ERROR: Also, for models that include NonMaxSuppression in the post-processing, try the -onwdt option.

Please help :)

Environment

Ultralytics YOLOv8.2.49 πŸš€ Python-3.9.19 torch-2.3.1 CPU (Cortex-A72)
Setup complete βœ… (4 CPUs, 7.6 GB RAM, 17.2/58.0 GB disk)

OS                  Linux-6.6.31+rpt-rpi-v8-aarch64-with-glibc2.36
Environment         Linux
Python              3.9.19
Install             git
RAM                 7.63 GB
CPU                 Cortex-A72
CUDA                None

numpy βœ… 1.24.3<2.0.0,>=1.23.0
matplotlib βœ… 3.9.1>=3.3.0
opencv-python βœ… 4.10.0.84>=4.6.0
pillow βœ… 10.4.0>=7.1.2
pyyaml βœ… 6.0.1>=5.3.1
requests βœ… 2.32.3>=2.23.0
scipy βœ… 1.13.1>=1.4.1
torch βœ… 2.3.1>=1.8.0
torchvision βœ… 0.18.1>=0.9.0
tqdm βœ… 4.66.4>=4.64.0
psutil βœ… 6.0.0
py-cpuinfo βœ… 9.0.0
pandas βœ… 2.2.2>=1.1.4
seaborn βœ… 0.13.2>=0.11.0
ultralytics-thop βœ… 2.0.0>=2.0.0

I'm also using tensorflow 2.13.1 and tensorflow-aarch64 2.13.1.

Minimal Reproducible Example

from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # load the default YOLOv8n weights
model.export(format="edgetpu")  # export to Edge TPU (TFLite) format

Additional

Here's the full log:

Ultralytics YOLOv8.2.49 πŸš€ Python-3.9.19 torch-2.3.1 CPU (Cortex-A72)
YOLOv8n summary (fused): 168 layers, 3151904 parameters, 0 gradients, 8.7 GFLOPs

PyTorch: starting from 'yolov8n.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (6.2 MB)

TensorFlow SavedModel: starting export with tensorflow 2.13.1...

ONNX: starting export with onnx 1.16.1 opset 17...
ONNX: slimming with onnxslim 0.1.31...
ONNX: export success βœ… 7.2s, saved as 'yolov8n.onnx' (12.3 MB)
TensorFlow SavedModel: starting TFLite export with onnx2tf 1.20.0...

Automatic generation of each OP name started ========================================
Automatic generation of each OP name complete!

Model loaded ========================================================================

Model conversion started ============================================================
ERROR: The trace log is below.
Traceback (most recent call last):
  File "/home/dian/.pyenv/versions/3.9.19/lib/python3.9/site-packages/onnx2tf/utils/common_functions.py", line 310, in print_wrapper_func
    result = func(*args, **kwargs)
  File "/home/dian/.pyenv/versions/3.9.19/lib/python3.9/site-packages/onnx2tf/utils/common_functions.py", line 383, in inverted_operation_enable_disable_wrapper_func
    result = func(*args, **kwargs)
  File "/home/dian/.pyenv/versions/3.9.19/lib/python3.9/site-packages/onnx2tf/utils/common_functions.py", line 53, in get_replacement_parameter_wrapper_func
    func(*args, **kwargs)
  File "/home/dian/.pyenv/versions/3.9.19/lib/python3.9/site-packages/onnx2tf/ops/Resize.py", line 417, in make_node
    resized_tensor = Lambda(
  File "/home/dian/.pyenv/versions/3.9.19/lib/python3.9/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1045, in __call__
    outputs = call_fn(inputs, *args, **kwargs)
  File "/home/dian/.pyenv/versions/3.9.19/lib/python3.9/site-packages/tensorflow/python/keras/layers/core.py", line 913, in call
    result = self.function(inputs, **kwargs)
  File "/home/dian/.pyenv/versions/3.9.19/lib/python3.9/site-packages/onnx2tf/utils/common_functions.py", line 1142, in upsampling2d_nearest
    return tf.compat.v1.image.resize_nearest_neighbor(
  File "/home/dian/.pyenv/versions/3.9.19/lib/python3.9/site-packages/tensorflow/python/ops/image_ops_impl.py", line 4769, in resize_nearest_neighbor
    return gen_image_ops.resize_nearest_neighbor(
  File "/home/dian/.pyenv/versions/3.9.19/lib/python3.9/site-packages/tensorflow/python/ops/gen_image_ops.py", line 3858, in resize_nearest_neighbor
    return resize_nearest_neighbor_eager_fallback(
  File "/home/dian/.pyenv/versions/3.9.19/lib/python3.9/site-packages/tensorflow/python/ops/gen_image_ops.py", line 3896, in resize_nearest_neighbor_eager_fallback
    _attr_T, (images,) = _execute.args_to_matching_eager([images], ctx, [_dtypes.int8, _dtypes.uint8, _dtypes.int16, _dtypes.uint16, _dtypes.int32, _dtypes.int64, _dtypes.half, _dtypes.float32, _dtypes.float64, _dtypes.bfloat16, ])
  File "/home/dian/.pyenv/versions/3.9.19/lib/python3.9/site-packages/tensorflow/python/eager/execute.py", line 251, in args_to_matching_eager
    tensor = tensor_conversion_registry.convert(t)
  File "/home/dian/.pyenv/versions/3.9.19/lib/python3.9/site-packages/tensorflow/python/framework/tensor_conversion_registry.py", line 234, in convert
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/dian/.pyenv/versions/3.9.19/lib/python3.9/site-packages/tensorflow/python/framework/constant_op.py", line 324, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/home/dian/.pyenv/versions/3.9.19/lib/python3.9/site-packages/tensorflow/python/framework/constant_op.py", line 263, in constant
    return _constant_impl(value, dtype, shape, name, verify_shape=False,
  File "/home/dian/.pyenv/versions/3.9.19/lib/python3.9/site-packages/tensorflow/python/framework/constant_op.py", line 275, in _constant_impl
    return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
  File "/home/dian/.pyenv/versions/3.9.19/lib/python3.9/site-packages/tensorflow/python/framework/constant_op.py", line 285, in _constant_eager_impl
    t = convert_to_eager_tensor(value, ctx, dtype)
  File "/home/dian/.pyenv/versions/3.9.19/lib/python3.9/site-packages/tensorflow/python/framework/constant_op.py", line 98, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)
  File "/home/dian/.pyenv/versions/3.9.19/lib/python3.9/site-packages/keras/src/engine/keras_tensor.py", line 285, in __array__
    raise TypeError(
TypeError: You are passing KerasTensor(type_spec=TensorSpec(shape=(1, 20, 20, 256), dtype=tf.float32, name=None), name='tf.math.multiply_119/Mul:0', description="created by layer 'tf.math.multiply_119'"), an intermediate Keras symbolic input/output, to a TF API that does not allow registering custom dispatchers, such as `tf.cond`, `tf.function`, gradient tapes, or `tf.map_fn`. Keras Functional model construction only supports TF API calls that *do* support dispatching, such as `tf.math.add` or `tf.reshape`. Other APIs cannot be called directly on symbolic Kerasinputs/outputs. You can work around this limitation by putting the operation in a custom Keras layer `call` and calling that layer on this symbolic input/output.

ERROR: input_onnx_file_path: yolov8n.onnx
ERROR: onnx_op_name: wa/model.10/Resize
ERROR: Read this and deal with it. https://github.com/PINTO0309/onnx2tf#parameter-replacement
ERROR: Alternatively, if the input OP has a dynamic dimension, use the -b or -ois option to rewrite it to a static shape and try again.
ERROR: If the input OP of ONNX before conversion is NHWC or an irregular channel arrangement other than NCHW, use the -kt or -kat option.
ERROR: Also, for models that include NonMaxSuppression in the post-processing, try the -onwdt option.

Are you willing to submit a PR?

github-actions[bot] commented 4 days ago

πŸ‘‹ Hello @ARusDian, thank you for your interest in Ultralytics YOLOv8 πŸš€! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a πŸ› Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics

Environments

YOLOv8 may be run in any of our up-to-date verified environments, with all dependencies (including CUDA/CUDNN, Python, and PyTorch) preinstalled.

Status

If the Ultralytics CI badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher commented 4 days ago

@ARusDian hello,

Thank you for providing detailed information about the issue you're encountering while exporting the YOLOv8n model to EdgeTPU format. It appears that the error is related to the onnx2tf conversion process, specifically with handling intermediate Keras symbolic inputs/outputs.

To help resolve this, please follow these steps:

  1. Ensure Latest Versions: Verify that you are using the latest versions of all relevant packages, including onnx, tensorflow, and onnx2tf. Updating often resolves compatibility issues like this one.

  2. Static Shape Conversion: The error message suggests using the -b or -ois option to rewrite dynamic dimensions to static shapes. If you run onnx2tf on the exported ONNX file yourself, include one of these options (see the sketch after this list).

  3. Parameter Replacement: The error also points to a potential solution involving parameter replacement. You can refer to the onnx2tf parameter replacement guide for detailed instructions on how to handle this.

  4. Custom Keras Layer: As a workaround, you can encapsulate the problematic operation within a custom Keras layer. This involves creating a custom layer that performs the operation and then using this layer in your model.

Here is a minimal example of how you might define a custom Keras layer:

import tensorflow as tf
from tensorflow.keras.layers import Layer


class CustomResizeLayer(Layer):
    """Wraps nearest-neighbor resizing in a Keras layer so it can be applied
    to symbolic inputs during Functional model construction."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def call(self, inputs):
        # Resize the NHWC feature map to a fixed 20x20 spatial size
        return tf.compat.v1.image.resize_nearest_neighbor(inputs, size=(20, 20))


# Usage in your model
inputs = tf.keras.Input(shape=(None, None, 256))
x = CustomResizeLayer()(inputs)
model = tf.keras.Model(inputs, x)

  5. Reproducible Example: If the issue persists, please provide a minimal reproducible example that demonstrates the problem. This will help us diagnose and address the issue more effectively. You can find guidelines for creating a reproducible example here.

  6. Check for Known Issues: Review the onnx2tf GitHub issues for similar problems and potential solutions.
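
If the automatic export keeps failing at this op, here is a minimal, hedged sketch of step 2: it calls onnx2tf directly on the ONNX file that the failed export already produced, forcing a static batch size (the equivalent of the -b option). The keyword argument names follow the onnx2tf Python API, so please verify them against your installed onnx2tf version.

import onnx2tf

# Convert the already-exported ONNX model with a static batch size
onnx2tf.convert(
    input_onnx_file_path="yolov8n.onnx",       # produced by the failed export run
    output_folder_path="yolov8n_saved_model",  # destination for the SavedModel/TFLite files
    batch_size=1,                              # rewrite the dynamic batch dimension to 1
)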

If you continue to experience difficulties, please update this thread with any new findings or additional error messages. We appreciate your patience and cooperation in resolving this issue.

Y-T-G commented 4 days ago

pip install tensorflow==2.16.1 tf-keras==2.16.0 onnx2tf==1.22.3

glenn-jocher commented 4 days ago

Hello @Y-T-G,

Thank you for your suggestion to install specific versions of tensorflow, tf-keras, and onnx2tf. Ensuring compatibility between these packages is indeed crucial for a successful export process.

If you haven't already, please verify that the issue persists with the latest versions of these packages. Sometimes, newer versions include important bug fixes and improvements that can resolve such issues.

Additionally, if the problem continues, providing a minimal reproducible example would be incredibly helpful. This allows us to better understand the context and specifics of the issue you're facing. You can find guidelines for creating a reproducible example here.

Here’s a quick summary of steps you can take:

  1. Update Packages: Ensure you are using compatible, up-to-date versions of tensorflow, tf-keras, and onnx2tf, for example the pinned versions suggested above (a quick check-and-retry sketch follows this list).
  2. Static Shape Conversion: Use the -b or -ois options to rewrite dynamic dimensions to static shapes during the conversion process.
  3. Custom Keras Layer: Consider encapsulating problematic operations within a custom Keras layer.
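
For reference, here is a hedged sketch that only checks the installed package versions and then retries the export, assuming the packages above are already installed; the imgsz argument is a standard export option used here to keep the input shape fixed at 640x640.

from importlib.metadata import version

from ultralytics import YOLO

# Confirm the environment roughly matches the versions suggested above
for pkg in ("tensorflow", "onnx2tf", "ultralytics"):
    print(pkg, version(pkg))

# Retry the Edge TPU export with a fixed 640x640 input size
model = YOLO("yolov8n.pt")
model.export(format="edgetpu", imgsz=640)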

If you need further assistance, feel free to share more details or any additional error messages you encounter. We're here to help! 😊