Closed: @ARusDian closed this issue 4 months ago.
Hello @ARusDian, thank you for your interest in Ultralytics YOLOv8! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.
If this is a Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Join the vibrant Ultralytics Discord community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.
Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.
pip install ultralytics
YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@ARusDian hello,
Thank you for providing detailed information about the issue you're encountering while exporting the YOLOv8n model to EdgeTPU format. It appears that the error is related to the onnx2tf conversion process, specifically with handling intermediate Keras symbolic inputs/outputs.
To help resolve this, please follow these steps:
Ensure Latest Versions: Verify that you are using the latest versions of all relevant packages, including onnx, tensorflow, onnx2tf, and coremltools. This can often resolve compatibility issues.
Static Shape Conversion: The error message suggests using the -b or -ois options to rewrite dynamic dimensions to static shapes. This can help in resolving issues related to dynamic dimensions in the ONNX model. You can try this by modifying the conversion command to include these options (see the conversion sketch after these steps).
Parameter Replacement: The error also points to a potential solution involving parameter replacement. You can refer to the onnx2tf parameter replacement guide for detailed instructions on how to handle this.
Custom Keras Layer: As a workaround, you can encapsulate the problematic operation within a custom Keras layer. This involves creating a custom layer that performs the operation and then using this layer in your model.
Here is a minimal example of how you might define a custom Keras layer:
import tensorflow as tf
from tensorflow.keras.layers import Layer

class CustomResizeLayer(Layer):
    """Wraps the nearest-neighbor resize op so it is traced as a single Keras layer."""

    def __init__(self, **kwargs):
        super(CustomResizeLayer, self).__init__(**kwargs)

    def call(self, inputs):
        # Resize feature maps to a fixed 20x20 spatial size with nearest-neighbor interpolation
        return tf.compat.v1.image.resize_nearest_neighbor(inputs, size=(20, 20))

# Usage in your model
inputs = tf.keras.Input(shape=(None, None, 256))
x = CustomResizeLayer()(inputs)
model = tf.keras.Model(inputs, x)
Reproducible Example: If the issue persists, please provide a minimal reproducible example that demonstrates the problem. This will help us diagnose and address the issue more effectively. You can find guidelines for creating a reproducible example here.
Check for Known Issues: Review the onnx2tf GitHub issues for similar problems and potential solutions.
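To illustrate the static-shape step above, here is a minimal sketch of a conversion via onnx2tf's Python entry point. The ONNX filename is an assumption, and the keyword arguments are assumed to mirror the -b / -ois CLI options; adjust them to your actual export.

```python
# Sketch only: convert an exported ONNX model to a TensorFlow SavedModel with static shapes.
# "yolov8n.onnx" is an assumed filename; batch_size / overwrite_input_shape are assumed to
# correspond to onnx2tf's -b / -ois CLI options.
import onnx2tf

onnx2tf.convert(
    input_onnx_file_path="yolov8n.onnx",
    output_folder_path="saved_model",
    batch_size=1,  # -b: rewrite the dynamic batch dimension to a static 1
    # overwrite_input_shape=["images:1,3,640,640"],  # -ois: alternatively pin the full input shape
)
```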
If you continue to experience difficulties, please update this thread with any new findings or additional error messages. We appreciate your patience and cooperation in resolving this issue.
pip install tensorflow==2.16.1 tf-keras==2.16.0 onnx2tf==1.22.3
Hello @Y-T-G,
Thank you for your suggestion to install specific versions of tensorflow, tf-keras, and onnx2tf. Ensuring compatibility between these packages is indeed crucial for a successful export process.
If you haven't already, please verify that the issue persists with the latest versions of these packages. Sometimes, newer versions include important bug fixes and improvements that can resolve such issues.
Additionally, if the problem continues, providing a minimal reproducible example would be incredibly helpful. This allows us to better understand the context and specifics of the issue you're facing. You can find guidelines for creating a reproducible example here.
Here's a quick summary of steps you can take:
1. Install the recommended versions of tensorflow, tf-keras, and onnx2tf (a minimal re-export sketch is included below).
2. Use the -b or -ois options to rewrite dynamic dimensions to static shapes during the conversion process.
If you need further assistance, feel free to share more details or any additional error messages you encounter. We're here to help!
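As a rough illustration of step 1, this is what re-running the Edge TPU export can look like once the pinned versions are installed (a sketch: yolov8n.pt is the model discussed in this thread, while imgsz=320 is an assumption, not a value taken from your logs):

```python
# Sketch: re-run the Edge TPU export after installing the pinned package versions.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # replace with your own trained weights if applicable
model.export(format="edgetpu", imgsz=320)  # a fixed imgsz keeps the exported input shape static
```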
pip install tensorflow==2.16.1 tf-keras==2.16.0 onnx2tf==1.22.3
This one is working nicely, but I need to upgrade my Python to 2.10; welp, I guess it is alright.
So I made 2 different environments, one for converting and one for running on the Edge TPU.
Thanks guys!
Hello @ARusDian,
I'm glad to hear that the suggested versions worked for you! Creating separate environments for conversion and running on the Edge TPU is a smart approach to manage dependencies effectively.
If you encounter any further issues or have additional questions, feel free to reach out. We're here to help!
Best of luck with your project!
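For the inference-only environment on the Raspberry Pi / Coral side, loading the exported model can look roughly like this (a sketch: the filename follows the exporter's usual naming and is an assumption, as are the sample image and image size):

```python
# Sketch: run the exported Edge TPU model in the inference environment (e.g. Raspberry Pi + Coral).
from ultralytics import YOLO

# The filename is an assumption based on the exporter's typical naming; use the file your export produced.
tpu_model = YOLO("yolov8n_full_integer_quant_edgetpu.tflite", task="detect")
results = tpu_model.predict("bus.jpg", imgsz=320)  # sample image; match the imgsz used at export time
```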
pip install tensorflow==2.16.1 tf-keras==2.16.0 onnx2tf==1.22.3
I want to know about a successful TFLite conversion. Can you tell me your Python version and post your pip list? Thank you very much!
I'm unable to provide a specific pip list, but using Python 3.10 with the mentioned package versions should work for TFLite conversion. If you encounter issues, please let us know!
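If anyone wants to share their working setup, a small script like this prints the interpreter version and the versions of the packages relevant here (the package list below is simply the set discussed in this thread):

```python
# Sketch: print the Python version and the versions of the packages relevant to this export.
import sys
from importlib.metadata import version, PackageNotFoundError

print("Python", sys.version)
for pkg in ("ultralytics", "tensorflow", "tf-keras", "onnx", "onnx2tf"):  # packages discussed above
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```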
I got through it with the following configuration: Python 3.9, tensorflow-gpu==2.9.1, onnx2tf==1.7.7.
The logs and results show success, but there was an error in the process, as follows: failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected.
So I want to know: what Python and dependency version numbers have worked for other people?
The logs and results show success, but there was an error in the process, as follows: failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected.
I guess it's because you installed the GPU version of TF; trying the CPU version might help. Anyway, here's my pip list:
@ARusDian hi,
1. From your pip list, I can see that you don't have onnx_graphsurgeon installed, but normally, if you don't have onnx_graphsurgeon installed, you get errors when you convert your model to TFLite format. I'm not sure how you fixed this.
2. When I changed TensorFlow to the CPU version, the "failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected" problem was solved.
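For anyone hitting the same CUDA_ERROR_NO_DEVICE message on a machine without a GPU, a quick check along these lines confirms TensorFlow is staying on the CPU (note the environment variable must be set before TensorFlow initializes):

```python
# Sketch: hide CUDA devices so a GPU build of TensorFlow falls back to the CPU, then verify.
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # must be set before TensorFlow is imported/initialized

import tensorflow as tf

print(tf.config.list_physical_devices("GPU"))  # expected: [] on a CPU-only machine
```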
@glenn-jocher @ARusDian Thanks for your help, I have successfully converted the .pt model to the .tflite format. Here is my configuration: Python 3.9.10, onnx==1.14.0, onnx2tf==1.7.7, tensorflow==2.9.1. Of course, many of the pip installation dependencies here are incompatible and I have forcibly ignored them. The compatibility of many of YOLOv8's dependencies is really poor, and there are many things that need to be improved! Thanks again for your help!
Normally, if you don't have onnx_graphsurgeon installed, you get errors when you convert your model to TFLite format. I'm not sure how you fixed this.
The package will auto-update it.
Of course, many of the pip installation dependencies here are incompatible and I have forcibly ignored them.
Maybe you can just install it through Google Colab.
Using Google Colab can help manage dependencies more smoothly. Give it a try for a streamlined setup.
@glenn-jocher @ARusDian Thanks for your advice, I will try Google Colab. If I have any questions, I will ask for your advice again. Thanks again, and best wishes to you!
You're welcome! Feel free to reach out if you have more questions. Best of luck with your project!
@glenn-jocher @ARusDian Hello, I have tried the model conversion with Python 3.10 and my configuration is as follows: Python 3.10, tensorflow==2.17.0, onnx==1.16.0, ultralytics==8.2.91. According to the result, the model conversion is successful, but the printed log has many warnings, as shown below:
ONNX: starting export with onnx 1.16.0 opset 15...
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (this warning is repeated several times)
I would like to ask you how to resolve these warning logs; I would be very grateful for your help!
Warnings during model conversion are common and often don't affect functionality. If the model works as expected, they can usually be ignored. For specific concerns, consider consulting the ONNX documentation or community for guidance on handling these warnings.
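If you want extra confidence that the warnings are harmless, one lightweight check is to validate the exported ONNX graph and run it once, for example along these lines (a sketch: the filename, the 640x640 input size, and the use of onnxruntime are assumptions):

```python
# Sketch: sanity-check the exported ONNX model despite the shape-inference warnings.
import numpy as np
import onnx
import onnxruntime as ort

onnx.checker.check_model(onnx.load("yolov8n.onnx"))  # raises if the graph is structurally invalid

session = ort.InferenceSession("yolov8n.onnx", providers=["CPUExecutionProvider"])
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)  # assumed 1x3x640x640 input
outputs = session.run(None, {session.get_inputs()[0].name: dummy})
print([o.shape for o in outputs])  # output shapes should match what the exporter reported
```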
Yes, the model works as expected, and the warnings don't affect functionality, as follows:
Thanks for your help!
You're welcome! If everything is functioning as expected, there's no need for concern. If you have further questions, feel free to ask.
Just wondering :) will poetry help ultralytics to improve dependency management?
pip install tensorflow==2.16.1 tf-keras==2.16.0 onnx2tf==1.22.3
This one is working nicely, but I need to upgrade my Python to 2.10; welp, I guess it is alright.
So I made 2 different environments, one for converting and one for running on the Edge TPU.
Thanks guys!
I think you mean Python 3.10, not Python 2.10.
You're welcome! Feel free to reach out if you have any questions. Best of luck with your project!
Search before asking
YOLOv8 Component
Export
Bug
Got a bug, but I think it's from the ONNX to TFLite model conversion. I'm using a Raspberry Pi 4 to run on a Coral USB Accelerator.
The error message said this:
Please help :)
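For context, the failing call is presumably an Edge TPU export along these lines (a sketch based on the discussion above, not the exact command from the log):

```python
# Sketch of the kind of export that triggers the reported error; the exact command is not shown here.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")      # the thread discusses exporting YOLOv8n
model.export(format="edgetpu")  # fails during the ONNX -> TFLite (onnx2tf) conversion step
```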
Environment
Ultralytics YOLOv8.2.49 🚀 Python-3.9.19 torch-2.3.1 CPU (Cortex-A72)
Setup complete ✅ (4 CPUs, 7.6 GB RAM, 17.2/58.0 GB disk)
OS: Linux-6.6.31+rpt-rpi-v8-aarch64-with-glibc2.36
Environment: Linux
Python: 3.9.19
Install: git
RAM: 7.63 GB
CPU: Cortex-A72
CUDA: None
numpy ✅ 1.24.3<2.0.0,>=1.23.0
matplotlib ✅ 3.9.1>=3.3.0
opencv-python ✅ 4.10.0.84>=4.6.0
pillow ✅ 10.4.0>=7.1.2
pyyaml ✅ 6.0.1>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.13.1>=1.4.1
torch ✅ 2.3.1>=1.8.0
torchvision ✅ 0.18.1>=0.9.0
tqdm ✅ 4.66.4>=4.64.0
psutil ✅ 6.0.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.2>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.0>=2.0.0
Also, I'm using tensorflow 2.13.1 and tensorflow-aarch64 2.13.1.
Minimal Reproducible Example
Additional
Here's the full log:
Are you willing to submit a PR?