
Yolo11 `export` to tflite/tfjs not working on Windows #19688

Open enusbaum opened 1 week ago

enusbaum commented 1 week ago

Search before asking

Ultralytics YOLO Component

Export

Bug

I'm trying to export a yolo11n model I trained earlier today on a custom dataset. Running detect works, and it accurately detects the objects I've specified in test images.

When running export, even after a clean install following the first few setup steps, the export never completes for either tflite or tfjs. I suspect there's a dependency issue somewhere, but as you can see from my environment setup, it's about as basic as it gets.

Environment

Ultralytics 8.3.89 πŸš€ Python-3.11.9 torch-2.6.0+cu126 CUDA:0 (NVIDIA GeForce RTX 4090 Laptop GPU, 16376MiB)
Setup complete βœ… (32 CPUs, 95.8 GB RAM, 842.4/1844.6 GB disk)

OS                  Windows-10-10.0.26100-SP0
Environment         Windows
Python              3.11.9
Install             pip
Path                C:\Users\eric\source\yolo\venv\Lib\site-packages\ultralytics
RAM                 95.77 GB
Disk                842.4/1844.6 GB
CPU                 Intel Core(TM) i9-14900HX
CPU count           32
GPU                 NVIDIA GeForce RTX 4090 Laptop GPU, 16376MiB
GPU count           1
CUDA                12.6

numpy               βœ… 2.1.1<=2.1.1,>=1.23.0
matplotlib          βœ… 3.10.1>=3.3.0
opencv-python       βœ… 4.11.0.86>=4.6.0
pillow              βœ… 11.0.0>=7.1.2
pyyaml              βœ… 6.0.2>=5.3.1
requests            βœ… 2.32.3>=2.23.0
scipy               βœ… 1.15.2>=1.4.1
torch               βœ… 2.6.0+cu126>=1.8.0
torch               βœ… 2.6.0+cu126!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision         βœ… 0.21.0+cu126>=0.9.0
tqdm                βœ… 4.67.1>=4.64.0
psutil              βœ… 7.0.0
py-cpuinfo          βœ… 9.0.0
pandas              βœ… 2.2.3>=1.1.4
seaborn             βœ… 0.13.2>=0.11.0
ultralytics-thop    βœ… 2.0.14>=2.0.0

Minimal Reproducible Example

My initial setup is very simple.

I can confirm my model is working properly and that I'm able to run inference against my model with a test image that has the object detected successfully:

(venv) C:\Users\eric\source\yolo>yolo detect predict model=c:\temp\best.pt source=c:\Users\eric\Desktop\test.jpg
Ultralytics 8.3.89 πŸš€ Python-3.11.9 torch-2.6.0+cu126 CUDA:0 (NVIDIA GeForce RTX 4090 Laptop GPU, 16376MiB)
YOLO12n summary (fused): 159 layers, 2,556,923 parameters, 0 gradients, 6.3 GFLOPs

image 1/1 c:\Users\eric\Desktop\test.jpg: 640x480 1 object, 35.9ms
Speed: 2.5ms preprocess, 35.9ms inference, 56.9ms postprocess per image at shape (1, 3, 640, 480)
Results saved to runs\detect\predict
πŸ’‘ Learn more at https://docs.ultralytics.com/modes/predict

From this point, I have my model (best.pt), and I'm going to export it to tflite or tfjs.
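
(For reference, the CLI commands below should be equivalent to this minimal Python API call; it's a sketch using the same paths as my setup, in case the CLI wrapper itself turns out to be part of the problem.)

from ultralytics import YOLO

model = YOLO(r"c:\temp\best.pt")   # same custom-trained weights as above
model.export(format="tflite")      # or format="tfjs"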

First Command: yolo export model=c:\temp\best.pt format=tflite

(venv) C:\Users\eric\source\yolo>yolo version
8.3.89

(venv) C:\Users\eric\source\yolo>yolo export model=c:\temp\best.pt format=tflite
Ultralytics 8.3.89 πŸš€ Python-3.11.9 torch-2.6.0+cu126 CPU (Intel Core(TM) i9-14900HX)
YOLO12n summary (fused): 159 layers, 2,556,923 parameters, 0 gradients, 6.3 GFLOPs

PyTorch: starting from 'c:\temp\best.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 5, 8400) (5.3 MB)
requirements: Ultralytics requirement ['tensorflow-cpu>=2.0.0'] not found, attempting AutoUpdate...

(venv) C:\Users\eric\source\yolo>ERROR: Pipe to stdout was broken
Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='cp1252'>
OSError: [Errno 22] Invalid argument

Running the same command a second time: yolo export model=c:\temp\best.pt format=tflite

(venv) C:\Users\eric\source\yolo>yolo export model=c:\temp\best.pt format=tflite
Ultralytics 8.3.89 πŸš€ Python-3.11.9 torch-2.6.0+cu126 CPU (Intel Core(TM) i9-14900HX)
YOLO12n summary (fused): 159 layers, 2,556,923 parameters, 0 gradients, 6.3 GFLOPs

PyTorch: starting from 'c:\temp\best.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 5, 8400) (5.3 MB)
requirements: Ultralytics requirements ['tf_keras', 'sng4onnx>=1.0.1', 'onnx_graphsurgeon>=0.3.26', 'onnx>=1.12.0', 'onnx2tf>1.17.5,<=1.26.3', 'onnxslim>=0.1.31', 'tflite_support', 'onnxruntime'] not found, attempting AutoUpdate...

(venv) C:\Users\eric\source\yolo>ERROR: Pipe to stdout was broken
Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='cp1252'>
OSError: [Errno 22] Invalid argument
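
The encoding='cp1252' in the ignored exception makes me suspect the legacy Windows console code page is tripping up the AutoUpdate subprocess output. A possible (untested) mitigation is to force UTF-8 mode before re-running the export:

set PYTHONUTF8=1
set PYTHONIOENCODING=utf-8
yolo export model=c:\temp\best.pt format=tflite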

After manually installing the required packages via pip: pip install --no-cache-dir "tf_keras" "sng4onnx>=1.0.1" "onnx_graphsurgeon>=0.3.26" "onnx>=1.12.0" "onnx2tf>1.17.5,<=1.26.3" "onnxslim>=0.1.31" "tflite_support" "onnxruntime" --extra-index-url https://pypi.ngc.nvidia.com

When I run the export command again, the process dies quietly:

(venv) C:\Users\eric\source\yolo>yolo export model=c:\temp\best.pt format=tflite
Ultralytics 8.3.89 πŸš€ Python-3.11.9 torch-2.6.0+cu126 CPU (Intel Core(TM) i9-14900HX)
YOLO12n summary (fused): 159 layers, 2,556,923 parameters, 0 gradients, 6.3 GFLOPs

PyTorch: starting from 'c:\temp\best.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 5, 8400) (5.3 MB)

TensorFlow SavedModel: starting export with tensorflow 2.19.0...

(venv) C:\Users\eric\source\yolo>
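
To check whether this silent exit is a hard crash somewhere in native code, here is a minimal sketch with faulthandler enabled (same weights as above; the assumption that the crash happens inside the TensorFlow converter is mine and unverified):

import faulthandler

faulthandler.enable()  # dumps a Python traceback even if the interpreter is killed by a native-code fault

from ultralytics import YOLO

YOLO(r"c:\temp\best.pt").export(format="tflite")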

I've reinstalled and repeated this process more than four times now with exactly the same results.

Additional

No response

Are you willing to submit a PR?

UltralyticsAssistant commented 1 week ago

πŸ‘‹ Hello @enusbaum, thank you for bringing this to our attention and for providing such detailed information about your issue! πŸš€

We recommend reviewing the Docs for potential solutions, particularly the Export Guide which includes detailed instructions and examples for exporting models to various formats like TFLite and TFJS.

If this is a πŸ› Bug Report, please ensure you have provided a minimum reproducible example (MRE). From your description, you’ve already detailed your environment and steps, which is helpfulβ€”thank you! If possible, please also share the specific model file (best.pt) or a simplified version of the dataset and code to ensure we can replicate the issue accurately.

Here are some additional steps you can take to troubleshoot:

  1. Upgrade to the latest ultralytics version and verify all dependencies are properly installed:
    pip install -U ultralytics
  2. Ensure your Python environment meets the requirements listed in the pyproject.toml.
  3. If you’re using Windows, consider running the export in a clean environment or on a different OS (e.g., in a cloud-based environment) for comparison.

Environments

YOLO models can be run and exported in any of the verified environments linked in the Ultralytics docs. If you're encountering issues locally, consider testing in one of these.

Status

If the Ultralytics CI badge is green, all Ultralytics CI tests are passing, which verifies correct operation of all YOLO Modes and Tasks.

An Ultralytics engineer will review your issue and respond soon. In the meantime, feel free to join the community for additional support.

Thanks for your patience! 😊

Y-T-G commented 1 week ago

You can try this

https://github.com/ultralytics/ultralytics/issues/14235#issuecomment-2211489301

enusbaum commented 1 week ago

You can try this

#14235 (comment)

Hello @Y-T-G! I found your post earlier while troubleshooting this, but unfortunately it doesn't fix the problem and introduces issues of its own.

When installing those specific versions, pip reports back package issues:

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorflow-cpu 2.19.0 requires ml-dtypes<1.0.0,>=0.5.1, but you have ml-dtypes 0.3.2 which is incompatible.
tensorflow-cpu 2.19.0 requires tensorboard~=2.19.0, but you have tensorboard 2.16.2 which is incompatible.

And when re-running the export after installing the specific package versions your post recommends, it crashes with the following error:

(venv) C:\Users\eric\source\yolo>yolo export model=c:\temp\best.pt format=tflite
Ultralytics 8.3.89 πŸš€ Python-3.11.9 torch-2.6.0+cu126 CPU (Intel Core(TM) i9-14900HX)
YOLO12n summary (fused): 159 layers, 2,556,923 parameters, 0 gradients, 6.3 GFLOPs

PyTorch: starting from 'c:\temp\best.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 5, 8400) (5.3 MB)
TensorFlow SavedModel: export failure ❌ 0.0s: module 'tensorflow' has no attribute '__version__'
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\eric\source\yolo\venv\Scripts\yolo.exe\__main__.py", line 7, in <module>
  File "C:\Users\eric\source\yolo\venv\Lib\site-packages\ultralytics\cfg\__init__.py", line 985, in entrypoint
    getattr(model, mode)(**overrides)  # default args from model
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eric\source\yolo\venv\Lib\site-packages\ultralytics\engine\model.py", line 742, in export
    return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eric\source\yolo\venv\Lib\site-packages\ultralytics\engine\exporter.py", line 438, in __call__
    f[5], keras_model = self.export_saved_model()
                        ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eric\source\yolo\venv\Lib\site-packages\ultralytics\engine\exporter.py", line 182, in outer_func
    raise e
  File "C:\Users\eric\source\yolo\venv\Lib\site-packages\ultralytics\engine\exporter.py", line 177, in outer_func
    f, model = inner_func(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eric\source\yolo\venv\Lib\site-packages\ultralytics\engine\exporter.py", line 1019, in export_saved_model
    LOGGER.info(f"\n{prefix} starting export with tensorflow {tf.__version__}...")
                                                              ^^^^^^^^^^^^^^
AttributeError: module 'tensorflow' has no attribute '__version__'

Y-T-G commented 1 week ago

Do you have a file called tensorflow.py in the same folder?
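
A quick generic check for a shadowed or half-installed TensorFlow (nothing Ultralytics-specific) is to print where the module actually resolves from:

python -c "import tensorflow as tf; print(tf.__file__); print(getattr(tf, '__version__', 'no __version__ attribute'))"

If tf.__file__ points anywhere other than site-packages, something local is shadowing the real package.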

enusbaum commented 1 week ago

Do you have a file called tensorflow.py in the same folder?

I do not. There is no tensorflow.py in my venv working folder (c:\users\eric\source\yolo) or in the folder where my model currently lives (c:\temp).

After experimenting with different package versions and pinning them manually, I've got the tflite export working:

(venv) C:\Users\eric\source\yolo>yolo export model=c:\temp\best.pt format=tflite
Ultralytics 8.3.89 πŸš€ Python-3.11.9 torch-2.6.0+cu126 CPU (Intel Core(TM) i9-14900HX)
YOLO12n summary (fused): 159 layers, 2,556,923 parameters, 0 gradients, 6.3 GFLOPs

PyTorch: starting from 'c:\temp\best.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 5, 8400) (5.3 MB)
AttributeError: 'MessageFactory' object has no attribute 'GetPrototype'
AttributeError: 'MessageFactory' object has no attribute 'GetPrototype'
AttributeError: 'MessageFactory' object has no attribute 'GetPrototype'
AttributeError: 'MessageFactory' object has no attribute 'GetPrototype'
AttributeError: 'MessageFactory' object has no attribute 'GetPrototype'

TensorFlow SavedModel: starting export with tensorflow 2.18.0...
WARNING:tensorflow:From C:\Users\eric\source\yolo\venv\Lib\site-packages\tf_keras\src\losses.py:2976: The name tf.losses.sparse_softmax_cross_entropy is deprecated. Please use tf.compat.v1.losses.sparse_softmax_cross_entropy instead.

ONNX: starting export with onnx 1.17.0 opset 19...
ONNX: slimming with onnxslim 0.1.48...
ONNX: export success βœ… 1.2s, saved as 'c:\temp\best.onnx' (10.1 MB)
TensorFlow SavedModel: starting TFLite export with onnx2tf 1.26.2...
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1741958929.382013   10692 devices.cc:76] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA or ROCm support)
I0000 00:00:1741958929.382682   10692 single_machine.cc:361] Starting new session
W0000 00:00:1741958930.593415   10692 tf_tfl_flatbuffer_helpers.cc:365] Ignored output_format.
W0000 00:00:1741958930.593830   10692 tf_tfl_flatbuffer_helpers.cc:368] Ignored drop_control_dependency.
I0000 00:00:1741958932.254326   10692 devices.cc:76] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA or ROCm support)
I0000 00:00:1741958932.254617   10692 single_machine.cc:361] Starting new session
W0000 00:00:1741958933.407578   10692 tf_tfl_flatbuffer_helpers.cc:365] Ignored output_format.
W0000 00:00:1741958933.407709   10692 tf_tfl_flatbuffer_helpers.cc:368] Ignored drop_control_dependency.
TensorFlow SavedModel: export success βœ… 61.1s, saved as 'c:\temp\best_saved_model' (28.7 MB)

TensorFlow Lite: starting export with tensorflow 2.18.0...
TensorFlow Lite: export success βœ… 0.0s, saved as 'c:\temp\best_saved_model\best_float32.tflite' (10.4 MB)

Export complete (61.4s)
Results saved to C:\temp
Predict:         yolo predict task=detect model=c:\temp\best_saved_model\best_float32.tflite imgsz=640
Validate:        yolo val task=detect model=c:\temp\best_saved_model\best_float32.tflite imgsz=640 data=data.yaml
Visualize:       https://netron.app
πŸ’‘ Learn more at https://docs.ultralytics.com/modes/export


Here's my current package list for this environment:

absl-py==2.1.0
astunparse==1.6.3
cachetools==5.5.2
certifi==2025.1.31
charset-normalizer==3.4.1
chex==0.1.89
colorama==0.4.6
coloredlogs==15.0.1
contourpy==1.3.1
cycler==0.12.1
etils==1.12.2
filelock==3.13.1
flatbuffers==25.2.10
flax==0.10.4
fonttools==4.56.0
fsspec==2024.6.1
gast==0.6.0
google-auth==2.38.0
google-auth-oauthlib==1.2.1
google-pasta==0.2.0
grpcio==1.71.0
h5py==3.13.0
humanfriendly==10.0
humanize==4.12.1
idna==3.10
importlib_resources==6.5.2
jax==0.4.34
jaxlib==0.4.34
Jinja2==3.1.4
keras==3.9.0
kiwisolver==1.4.8
libclang==18.1.1
Markdown==3.7
markdown-it-py==3.0.0
MarkupSafe==2.1.5
matplotlib==3.10.1
mdurl==0.1.2
ml-dtypes==0.4.1
mpmath==1.3.0
msgpack==1.1.0
namex==0.0.8
nest-asyncio==1.6.0
networkx==3.3
numpy==1.26.4
oauthlib==3.2.2
onnx==1.17.0
onnx2tf==1.26.2
onnx_graphsurgeon==0.5.6
onnxruntime==1.21.0
onnxslim==0.1.48
opencv-python==4.11.0.86
opt_einsum==3.4.0
optax==0.2.4
optree==0.14.1
orbax-checkpoint==0.11.5
packaging==23.2
pandas==2.2.3
pillow==11.0.0
protobuf==6.30.1
psutil==7.0.0
py-cpuinfo==9.0.0
pyasn1==0.6.1
pyasn1_modules==0.4.1
pybind11==2.13.6
Pygments==2.19.1
pyparsing==3.2.1
pyreadline3==3.5.4
python-dateutil==2.9.0.post0
pytz==2025.1
PyYAML==6.0.2
requests==2.32.3
requests-oauthlib==2.0.0
rich==13.9.4
rsa==4.9
scipy==1.15.2
seaborn==0.13.2
simplejson==3.20.1
six==1.17.0
sng4onnx==1.0.4
sympy==1.13.1
tensorboard==2.18.0
tensorboard-data-server==0.7.2
tensorflow==2.18.1
tensorflow-decision-forests==1.8.1
tensorflow-estimator==2.15.0
tensorflow-hub==0.16.1
tensorflow-io-gcs-filesystem==0.31.0
tensorflow_cpu==2.18.1
tensorflow_intel==2.18.0
tensorflowjs==4.21.0
tensorstore==0.1.72
termcolor==2.5.0
tf_keras==2.18.0
tflite-support==0.1.0a1
toolz==1.0.0
torch==2.6.0+cu126
torchaudio==2.6.0+cu126
torchvision==0.21.0+cu126
tqdm==4.67.1
treescope==0.1.9
typing_extensions==4.12.2
tzdata==2025.1
ultralytics==8.3.89
ultralytics-thop==2.0.14
urllib3==2.3.0
Werkzeug==3.1.3
wrapt==1.14.1
wurlitzer==3.1.1
zipp==3.21.0

That being said, attempting to export to tfjs tries to install tensorflowjs==4.20.0, which appears to depend on tensorflow==2.15.1, tf_keras==2.15.1, etc., and that puts everything back into a broken state.
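
An untested idea: tensorflowjs==4.21.0 is already present in the list above, so pinning it with --no-deps would at least keep pip from dragging the rest of the stack back to 2.15.x, assuming the exporter accepts the already-installed version instead of forcing 4.20.0:

pip install --no-deps "tensorflowjs==4.21.0"
yolo export model=c:\temp\best.pt format=tfjs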

enusbaum commented 1 week ago

It appears the issue might be related to tensorflow-decision-forests: pip currently lists 1.8.1 as the most recent version available on this machine:

(venv) C:\Users\eric\source\yolo>pip index versions tensorflow-decision-forests
WARNING: pip index is currently an experimental command. It may be removed/changed in a future release without prior warning.
tensorflow-decision-forests (1.8.1)
Available versions: 1.8.1
  INSTALLED: 1.8.1
  LATEST:    1.8.1

This version seems to depend on tensorflow==2.15.0, which causes the tfjs export to fail with the following warning:

WARNING:root:TensorFlow Decision Forests 1.8.1 is compatible with the following TensorFlow Versions: ['2.15.0']. However, TensorFlow 2.18.0 was detected. This can cause issues with the TF API and symbols in the custom C++ ops. See the TF and TF-DF compatibility table at https://github.com/tensorflow/decision-forests/blob/main/documentation/known_issues.md#compatibility-table.

I'm curious whether this is a 64-bit Windows compatibility issue, as PyPI lists the latest version of tensorflow-decision-forests as 1.12.0; I'd imagine 1.11.0 would be the best match for the 2.18.x TensorFlow branch (and the related libraries).
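
To confirm which package is actually pulling tensorflow-decision-forests into this environment, the Requires / Required-by fields from pip show are a quick generic check (nothing Ultralytics-specific):

pip show tensorflow-decision-forests
pip show tensorflowjs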

glenn-jocher commented 1 week ago

Thanks for sharing these details. The tensorflow-decision-forests package isn't required for Ultralytics exports and appears to be causing version conflicts. Let's simplify the environment:

pip uninstall tensorflow-decision-forests
pip install "tensorflow-cpu>=2.15.0" "tf_keras>=2.15.0" "tensorflowjs>=4.20.0"

For TF.js exports on Windows, we recommend using a clean environment with only the required dependencies. The Ultralytics TF SavedModel guide shows the specific package versions needed for successful exports.
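
If that cleanup goes through, the TF.js export can then be retried with the same weights as in the earlier runs (sketch):

yolo export model=c:\temp\best.pt format=tfjs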

JorisOpsommer commented 1 week ago

@enusbaum you've probably figured this out by yourself already, but I used your config and completed the export with the requirements.txt below:

pip install --upgrade -r requirements.txt

gitpython
matplotlib==3.10.1
opencv-python==4.11.0.86
pillow==11.0.0
psutil==7.0.0
PyYAML==6.0.2
requests
scipy==1.15.2
ultralytics==8.3.89
ultralytics-thop==2.0.14
tqdm==4.67.1
pandas==2.2.3
seaborn==0.13.2
numpy==1.26.4
ml-dtypes==0.4.1
protobuf==5.29.3
keras==3.9.0
tf_keras==2.18.0
tensorflow==2.18.1
tensorboard==2.18.0
tensorflow-cpu>=2.15.0
sng4onnx==1.0.4
onnx_graphsurgeon==0.5.6
onnx==1.17.0
onnx2tf==1.26.2
onnxslim==0.1.48
onnxruntime==1.21.0
tflite_support
onnxruntime-gpu

This works here.
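
(In other words: save the list above as requirements.txt, install it into a fresh venv with the pip command shown, and then re-run yolo export model=c:\temp\best.pt format=tflite. The paths are the ones from the earlier posts, not anything specific to this package set.)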