Closed: Larvouu closed this issue 1 year ago.
👋 Hello @Larvouu, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.
If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Python>=3.7.0 with all requirements.txt dependencies installed, including PyTorch>=1.7. To get started:
```bash
git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install
```
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀!
Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.
Check out our YOLOv8 Docs for details and get started with:
```bash
pip install ultralytics
```
@Larvouu hello! Thank you for reaching out to us regarding YOLOv5. It's great to hear that you were able to export your model successfully.
Poor performance on mobile devices could be due to various reasons such as hardware limitations, suboptimal implementation of the model on the device, or even model architecture choices that might be computationally heavy.
Regarding your issue with detect.py: if you were able to get successful detections using best.pt but not with your exported .mlmodel, this suggests there may be a problem with how the model was exported to that format, or that the model architecture might not be compatible with Core ML.
With the information currently provided it is hard to pinpoint a solution, but one thing you can check is that the input and output formats of your .mlmodel are set correctly; they have to be compatible with the Core ML model format. I hope this helps! Let us know if you have any further questions or need additional assistance.
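For reference, here is a minimal sketch of that check using coremltools. It assumes coremltools is installed (pip install coremltools), and the model path is a placeholder to replace with your own file:

```python
# Minimal sketch: inspect a Core ML model's input/output descriptions.
# "best.mlmodel" is a placeholder path -- point it at your own export.
import coremltools as ct

model = ct.models.MLModel("best.mlmodel")
spec = model.get_spec()

print(spec.description.input)   # input names, types and image sizes
print(spec.description.output)  # output names and tensor shapes
```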
@glenn-jocher To provide more information, I tried two different versions of export.py.
For the first one, I used this command:
```bash
python export_iphone.py --weights new_yolov5/runs/train/exp16_ober/weights/best.pt --train --img 320
```
This one is from an old export.py (I think the --train parameter no longer exists in the recent export.py). Here is what I got from the console:
And this is the model I got:
This is the model I tried with detect.py: I got no detections (but no error).
I then thought that maybe it was because I had used an old export.py and the structure of my model wasn't right, so I tried the most recent export.py on the same best.pt using this command:
```bash
python export.py --weights runs/train/exp16_ober/weights/best.pt --include coreml --img 320
```
This is what I got from the console:
Everything seemed to go smoothly and I got no errors. But when I then tried detect.py on this .mlmodel, I got this unexpected error:
I looked for a solution but didn't find anything related at all. In addition, I couldn't find a way to integrate this model into an app, as its structure is different from the first one. Here is its structure, using the latest yolov5 pull and therefore the latest export.py:
I also looked at the default yolov5n.mlmodel from the official documentation, and it has yet another structure, different from both of my models. I'm really confused right now: I can't tell which .mlmodel structure is the right one, how to integrate it into my app, or why I can't manage to run either of them with detect.py.
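For reference, the top-level structure of the two exports could be compared with a sketch like the one below; the paths are placeholders, and whether a given export is a bare network or a pipeline depends on the export script used:

```python
# Minimal sketch: compare the top-level type and output names of two
# Core ML exports. A model may be a bare "neuralNetwork" or a
# "pipeline" bundling extra stages; the paths are placeholders.
import coremltools as ct

for path in ("old_export.mlmodel", "new_export.mlmodel"):
    spec = ct.models.MLModel(path).get_spec()
    print(path, "->", spec.WhichOneof("Type"))
    print("  outputs:", [o.name for o in spec.description.output])
```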
Thank you, and sorry for the long message; I tried to provide as much information as possible.
Hello @Larvouu! Thank you for providing a detailed explanation of the issue you are facing while trying to use your exported ML model with detect.py. Based on the information you have provided, it seems the exported model might not be in a format that is compatible with Core ML.
To ensure that the exported model has been successfully converted to the Core ML format, you can follow these steps:
1. Open the .mlmodel file in Xcode and inspect the input and output formats of the layers.
2. Run a test prediction on the model directly, outside of the detect.py script (see the sketch after this list).
To make it easier to integrate your model into your app, you can use a mobile deep learning framework that is compatible with Apple devices, such as Core ML.
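Here is a minimal sketch of step 2 using coremltools. The input name "image" and the 320x320 size are assumptions, so use whatever names and sizes Xcode reports for your model; note that coremltools prediction only runs on macOS:

```python
# Minimal sketch: run one prediction on the exported model directly
# through coremltools, bypassing detect.py. The paths, the input
# name "image" and the 320x320 size are assumptions -- use the
# values Xcode reports for your model.
import coremltools as ct
from PIL import Image

model = ct.models.MLModel("best.mlmodel")
img = Image.open("data/images/bus.jpg").resize((320, 320))

out = model.predict({"image": img})  # dict of named outputs
for name, value in out.items():
    print(name, getattr(value, "shape", type(value)))
```

If this produces sensible raw outputs while detect.py does not, the problem is more likely in the post-processing than in the exported weights.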
Regarding the differences in model structure between the exported models, it's important to note that the structure of an exported model depends on the export script being used. We recommend using the most recent version of the export script to ensure compatibility with the latest version of YOLOv5.
I hope this information helps! Please let us know if you have any further questions or need additional assistance.
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
For additional resources and information, please see the YOLOv5 Docs and Tutorials.
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Search before asking
YOLOv5 Component
Detection
Bug
Hello, I have successfully exported my model to the .mlmodel format. However, the performance on my iPhone is quite poor (though it still works). I tried to use detect.py to see whether the problem comes from my model or from the implementation of the model in my app. I did not get a single detection from it, even with --conf-thres 0.01 (see the screenshot). I tried detect.py with the best.pt from the same training run and it works perfectly fine.
Any clue on this one?
Environment
YOLOv5n, Python 3.10.9, torch 2.0.0 (CPU), macOS 13.2.1
Minimal Reproducible Example
```bash
python detect.py --weights best_ober.mlmodel --source ./data/images/ --conf-thres 0.01 --img 320
```
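For comparison, a minimal baseline check with the .pt weights; the weights path and sample image are placeholders:

```python
# Minimal sketch: baseline check that the .pt weights detect as
# expected via torch.hub, for comparison with the .mlmodel run.
# "best_ober.pt" and the image path are placeholders.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="best_ober.pt")
model.conf = 0.01                       # same confidence threshold as the CLI run
results = model("data/images/bus.jpg")
results.print()                         # per-image detection summary
```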
Additional
No response
Are you willing to submit a PR?