Open jordond opened 1 month ago
Hello @jordond, thank you for your interest in Ultralytics YOLOv8! We recommend a visit to the Docs for new users, where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.
If this is a Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Join the vibrant Ultralytics Discord community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.
Pip install the ultralytics package, including all requirements, in a Python>=3.8 environment with PyTorch>=1.8:
pip install ultralytics
YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies, including CUDA/cuDNN, Python, and PyTorch, preinstalled):
If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@jordond hello! Thanks for reaching out with your question.
It looks like the error you're encountering with MediaPipe is due to it expecting a segmentation model with a single output. However, our exported .tflite model might have multiple outputs, which aren't compatible with MediaPipe's constraints.
One potential solution you can explore is to modify the output of your trained model before export so that it conforms to the single-output requirement. This involves slightly adjusting the model architecture or manipulating the output layers, which unfortunately isn't straightforward with the current YOLOv8 CLI tools, as they don't provide direct support for altering the model architecture post-training for export.
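As a rough illustration of the wrapping idea (this is a generic sketch, not an ultralytics API: `SingleOutputWrapper` is a hypothetical helper, and with a real YOLOv8 model you would wrap the underlying module the same way before running export):

```python
# Hypothetical sketch: wrap any multi-output model so it exposes only one
# output. "model" is any callable that returns a tuple of outputs.
class SingleOutputWrapper:
    def __init__(self, model, index=0):
        self.model = model
        self.index = index  # which of the original outputs to keep

    def __call__(self, *args, **kwargs):
        outputs = self.model(*args, **kwargs)
        return outputs[self.index]

# Usage with a stand-in model that returns two outputs:
dummy_model = lambda x: (x * 2, x + 1)
wrapped = SingleOutputWrapper(dummy_model, index=0)
print(wrapped(3))  # 6
```

The same pattern applies to a torch.nn.Module: subclass it, call the original model in forward(), and return only the tensor MediaPipe expects before exporting.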
As an alternative workaround, consider processing the model's outputs in your Android application to extract the required segmentation output before passing it to MediaPipe. Essentially, you'd adapt your Android code to handle the multiple outputs from the YOLO model and reduce them to the single output that MediaPipe expects.
Keep in mind that these approaches may require some custom development and testing to ensure compatibility and optimal performance with MediaPipe. We always recommend thoroughly testing the model in your specific application scenario after making these adjustments.
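To make the post-processing idea concrete, here is a minimal NumPy sketch of combining two outputs in the style of a YOLOv8-seg model (per-box mask coefficients plus prototype masks) into a single segmentation map. The shapes and random tensors below are stand-ins for illustration, not the exact tensors your .tflite export produces:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stand-in tensors with YOLOv8-seg-style shapes (assumed for illustration):
# coeffs come from the detection output, protos are the second model output.
rng = np.random.default_rng(0)
num_boxes, num_protos, h, w = 3, 32, 160, 160
coeffs = rng.standard_normal((num_boxes, num_protos))
protos = rng.standard_normal((num_protos, h, w))

# Per-box masks = sigmoid(coefficients @ flattened prototypes).
masks = sigmoid(coeffs @ protos.reshape(num_protos, -1)).reshape(num_boxes, h, w)

# Merge all boxes into one binary segmentation map, i.e. a single output.
segmentation = (masks > 0.5).any(axis=0).astype(np.uint8)
print(segmentation.shape)  # (160, 160)
```

The equivalent tensor operations would live in your Android (Kotlin/Java) code between the TFLite interpreter and MediaPipe.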
If you require further assistance or have more specific needs regarding model architecture modification, our team and the community are here to help in the discussions!
Thanks again for using YOLOv8, and best of luck with your project!
Search before asking
Question
I have trained a YOLOv8 model on my own data using the following command:
Then I exported it to a .tflite model using this command:

I then tried to load it in the MediaPipe Image segmentation example, but when loading the model I get the following error:
Any idea how I can modify the train or export command to make it compatible with MediaPipe?
Additional
No response