ultralytics / ultralytics

NEW - YOLOv8 πŸš€ in PyTorch > ONNX > OpenVINO > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0
26.47k stars 5.27k forks

MacOS error with TFLite model inference end2end model #13436

Closed · Burhan-Q closed this 3 days ago

Burhan-Q commented 1 month ago

Search before asking

YOLOv8 Component

No response

Bug

While working on #13113, I found that a TFLite-exported YOLOv10 model with end2end=True fails to run inference with this error:

PyTorch: starting from 'yolov10n.pt' with input shape (1, 3, 160, 160) BCHW and output shape(s) (1, 300, 6) (5.6 MB)

TensorFlow SavedModel: starting export with tensorflow 2.16.1...

ONNX: starting export with onnx 1.16.1 opset 17...
ONNX: export success βœ… 0.8s, saved as 'yolov10n.onnx' (8.9 MB)
TensorFlow SavedModel: starting TFLite export with onnx2tf 1.22.3...
W0000 00:00:1717769785.349377   41870 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
W0000 00:00:1717769785.349449   41870 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
W0000 00:00:1717769789.297754   41870 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
W0000 00:00:1717769789.297786   41870 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
TensorFlow SavedModel: export success βœ… 29.5s, saved as 'yolov10n_saved_model' (23.3 MB)

TensorFlow Lite: starting export with tensorflow 2.16.1...
TensorFlow Lite: export success βœ… 0.0s, saved as 'yolov10n_saved_model/yolov10n_float32.tflite' (9.3 MB)

Export complete (29.6s)
Results saved to /Users/runner/work/ultralytics/ultralytics
Predict:         yolo predict task=detect model=yolov10n_saved_model/yolov10n_float32.tflite imgsz=160  
Validate:        yolo val task=detect model=yolov10n_saved_model/yolov10n_float32.tflite imgsz=160 data=None  
Visualize:       https://netron.app/
Loading yolov10n_saved_model/yolov10n_float32.tflite for TensorFlow Lite inference...
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Traceback (most recent call last):
  File "/Users/runner/work/ultralytics/ultralytics/ultralytics/utils/benchmarks.py", line 126, in benchmark
    exported_model.predict(ASSETS / "bus.jpg", imgsz=imgsz, device=device, half=half)
  File "/Users/runner/work/ultralytics/ultralytics/ultralytics/engine/model.py", line 450, in predict
    self.predictor.setup_model(model=self.model, verbose=is_cli)
  File "/Users/runner/work/ultralytics/ultralytics/ultralytics/engine/predictor.py", line 298, in setup_model
    self.model = AutoBackend(
                 ^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/runner/work/ultralytics/ultralytics/ultralytics/nn/autobackend.py", line 350, in __init__
    interpreter.allocate_tensors()  # allocate
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tensorflow/lite/python/interpreter.py", line 531, in allocate_tensors
    return self._interpreter.AllocateTensors()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: tensorflow/lite/kernels/batch_matmul.cc:384 accum_dim_lhs != accum_dim_rhs (25 != 32)Node number 119 (BATCH_MATMUL) failed to prepare.Failed to apply the default TensorFlow Lite delegate indexed at 0.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/runner/work/ultralytics/ultralytics/ultralytics/cfg/__init__.py", line 602, in <module>
    entrypoint(debug="")
  File "/Users/runner/work/ultralytics/ultralytics/ultralytics/cfg/__init__.py", line 583, in entrypoint
    getattr(model, mode)(**overrides)  # default args from model
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/runner/work/ultralytics/ultralytics/ultralytics/engine/model.py", line 569, in benchmark
    return benchmark(
           ^^^^^^^^^^
  File "/Users/runner/work/ultralytics/ultralytics/ultralytics/utils/benchmarks.py", line 139, in benchmark
    assert type(e) is AssertionError, f"Benchmark failure for {name}: {e}"
           ^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Benchmark failure for TensorFlow Lite: tensorflow/lite/kernels/batch_matmul.cc:384 accum_dim_lhs != accum_dim_rhs (25 != 32)Node number 119 (BATCH_MATMUL) failed to prepare.Failed to apply the default TensorFlow Lite delegate indexed at 0.
Error: Process completed with exit code 1.
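For context, the `BATCH_MATMUL` failure above means the two operands of a batched matrix multiply disagree on their contraction (inner) dimension (25 vs 32), so the interpreter cannot even allocate tensors. A minimal NumPy analogue of the same shape rule (the shapes here are illustrative, not the model's actual tensors):

```python
import numpy as np

# Batched matmul requires shapes (..., M, K) @ (..., K, N): the inner K dims
# must match. Here lhs has K=25 and rhs has K=32, mirroring the 25 != 32 error.
lhs = np.ones((1, 4, 25))
rhs = np.ones((1, 32, 6))

try:
    np.matmul(lhs, rhs)
    raised = False
except ValueError as err:
    raised = True
    print("shape mismatch:", err)

print("raised:", raised)
```

This suggests the converted TFLite graph contains a matmul whose operand shapes were inferred inconsistently during export, rather than a runtime/data problem.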

Environment

Ultralytics YOLOv8.2.28 πŸš€ Python-3.11.9 torch-2.3.1 CPU (Apple M1 (Virtual))
Setup complete βœ… (3 CPUs, 7.0 GB RAM, 205.1/294.5 GB disk)

OS                  macOS-14.5-arm64-arm-64bit
Environment         Darwin
Python              3.11.9
Install             git
RAM                 7.00 GB
CPU                 Apple M1 (Virtual)
CUDA                None

matplotlib          βœ… 3.9.0>=3.3.0
opencv-python       βœ… 4.10.0.82>=4.6.0
pillow              βœ… 10.3.0>=7.1.2
pyyaml              βœ… 6.0.1>=5.3.1
requests            βœ… 2.32.3>=2.23.0
scipy               βœ… 1.13.1>=1.4.1
torch               βœ… 2.3.1>=1.8.0
torchvision         βœ… 0.18.1>=0.9.0
tqdm                βœ… 4.66.4>=4.64.0
psutil              βœ… 5.9.8
py-cpuinfo          βœ… 9.0.0
pandas              βœ… 2.2.2>=1.1.4
seaborn             βœ… 0.13.2>=0.11.0
ultralytics-thop    βœ… 0.2.7>=0.2.5

RUNNER_OS: macOS
GITHUB_EVENT_NAME: pull_request
GITHUB_WORKFLOW: Ultralytics CI

Minimal Reproducible Example

Run the Benchmark CI test on a macOS 14 GitHub runner.

Additional

The issue is only present with the code used in #13113 (at most e2b3b58), only on macOS, and only for YOLOv10n (end-to-end model). Ubuntu and Windows runners do not appear to have this issue.

Are you willing to submit a PR?

github-actions[bot] commented 2 weeks ago

πŸ‘‹ Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO πŸš€ and Vision AI ⭐