Closed: Kaisershmarren closed this issue 3 years ago
👋 Hello @Kaisershmarren, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.
If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.
For business inquiries or professional support requests please visit https://ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.
Python>=3.6.0 with all requirements.txt dependencies installed, including PyTorch>=1.7. To get started:
$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt
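Before training, a quick, illustrative way to confirm the environment meets those requirements (purely a sketch; adapt to your setup):

import sys
import torch

# Confirm the interpreter and PyTorch satisfy Python>=3.6 and PyTorch>=1.7
print(sys.version)
print(torch.__version__, torch.cuda.is_available())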
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.
Access additional YOLOv5 🚀 resources:
Access additional Ultralytics ⚡ resources:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
Unfortunately no... But I've just realised I didn't write down all the steps I took. I was trying to solve the problem of the missing class labels in the iOS CoreML model, so I modified export.py. I managed to get the class labels, but then the error about '740' came up...
I can't even get the class labels working. How did you do that?
@ngagesmu you should create a ClassifierConfig with your own class labels and pass it into the convert function:
import coremltools as ct  # provides ClassifierConfig, convert and ImageType

# Attach your own class labels via a classifier configuration during conversion
classifier_config = ct.ClassifierConfig(class_labels=['class1', 'class2', 'class3', 'class4'])
ct_model = ct.convert(ts, inputs=[ct.ImageType('image', shape=im.shape, scale=1 / 255, bias=[0, 0, 0])], classifier_config=classifier_config)
@MaxNazarov93 I added the labels as you suggested, but it also doesn't work: NSLocalizedDescription = "The size of the output layer 'var_1162' in the neural network does not match the number of classes in the classifier."
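That mismatch can be inspected directly from Python. A classifier layer must emit one probability per class, while YOLOv5's detection head emits a full prediction tensor, so printing the declared outputs usually makes the conflict visible. A minimal sketch, assuming coremltools is installed and the converted model was saved as yolov5s.mlmodel (hypothetical path):

import coremltools as ct

# Print every declared output feature of the converted model; a detection
# head reports a large multi-array rather than one probability per class
spec = ct.models.MLModel('yolov5s.mlmodel').get_spec()
for out in spec.description.output:
    print(out.name, out.type)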
I've exported the yolov5s.pt model to CoreML, creating yolov5s.mlmodel. I followed the instructions reported in #251 and #3238. The command I used is the following:
python3 export.py --weights yolov5s.pt --img 640 --batch 1 --train
The conversion seemed to complete successfully and created the model (with the right class labels).
I've included it in the sample object-detection Xcode project provided by Apple: https://developer.apple.com/documentation/vision/recognizing_objects_in_live_capture
But when the project runs on the iPhone, it throws the following error and nothing is recognised on the screen: NSLocalizedDescription=The size of the output layer '740' in the neural network does not match the number of classes in the classifier.
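As a sanity check before wiring a converted model into Xcode, one option is to run a prediction directly from Python on macOS and look at what the model actually returns. This is only a sketch: the input name 'image', the 640x640 size, and the test.jpg path are assumptions carried over from the export command above.

import coremltools as ct
from PIL import Image

# Run one prediction on macOS and print the raw output dictionary the
# model produces before it ever reaches Vision
model = ct.models.MLModel('yolov5s.mlmodel')
img = Image.open('test.jpg').resize((640, 640))  # hypothetical test image
outputs = model.predict({'image': img})
for name, value in outputs.items():
    print(name, getattr(value, 'shape', value))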
I've also run a check using the YOLOv3 MLModel provided by Apple on its site, and it works perfectly.
My environment is:
macOS Big Sur 11.5.1
Python 3.9
torch 1.9.0 (CPU)
onnx 1.10.0
scikit-learn 0.24.2
How did you finally fix it?
Hi friends, any news?
@Kaisershmarren how did you solve this problem?
@blocks-ai, apologies for the delayed response. The error you are getting during export occurs because the size of the output layer differs from the number of classes in the classifier. One possible solution would be to check that the classes used in your classifier match the ones in your trained model.
Regarding the issue on iOS, unfortunately I could not find enough information in your post to debug the problem. However, some possible ways to resolve it include verifying that the correct model file is being used in the iOS app, ensuring that the model's input size matches the input image's size, and verifying the number of output layers and their sizes, as sketched below.
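For the input-size check, a minimal sketch, again assuming the hypothetical yolov5s.mlmodel path used earlier:

import coremltools as ct

# Print the declared image input dimensions; for an export run with
# --img 640 these should both be 640
spec = ct.models.MLModel('yolov5s.mlmodel').get_spec()
for inp in spec.description.input:
    print(inp.name, inp.type.imageType.width, inp.type.imageType.height)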
I hope this helps. If you have any further questions, please do not hesitate to ask.
Hi all. Sorry for the late answer… I managed to make YOLOv5 work with Apple Vision by using what Leon0402 did in https://github.com/dbsystel/yolov5-coreml-tools.
I suggest, @glenn-jocher, using the source code provided by Leon to integrate the new layers into your Core ML export logic. That's because an object detection model in Core ML is useless if it is not compliant with Vision's requirements. A condensed sketch of the idea follows.
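For context, a heavily condensed and hypothetical sketch of the approach in that repo: append a NonMaximumSuppression model after the decoded detections so the final pipeline exposes the 'confidence' and 'coordinates' outputs that Vision's VNRecognizedObjectObservation path expects. The feature names, thresholds, and labels below are illustrative, not the repo's exact values; see Leon0402's code for the complete implementation.

from coremltools.proto import Model_pb2

# Build a standalone NonMaximumSuppression model spec whose outputs use
# the feature names Vision looks for on object detectors
nms_spec = Model_pb2.Model()
nms_spec.specificationVersion = 4
nms = nms_spec.nonMaximumSuppression
nms.confidenceInputFeatureName = 'raw_confidence'    # assumed decoder output name
nms.coordinatesInputFeatureName = 'raw_coordinates'  # assumed decoder output name
nms.confidenceOutputFeatureName = 'confidence'       # name Vision expects
nms.coordinatesOutputFeatureName = 'coordinates'     # name Vision expects
nms.iouThreshold = 0.45                              # illustrative threshold
nms.confidenceThreshold = 0.25                       # illustrative threshold
nms.stringClassLabels.vector.extend(['class1', 'class2'])  # illustrative labels

# The input/output feature descriptions still need to be declared on
# nms_spec.description, and this model is then chained after the raw YOLOv5
# model with coremltools.models.pipeline.Pipeline before saving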
bye
@Kaisershmarren hello,
Thank you for letting us know about your success in using the source code provided by Leon0402 to make YOLOv5 work with Apple Vision. We appreciate your suggestion regarding integrating the new layers in our export logic for Core ML to ensure compliance with Vision requirements.
We are always open to suggestions and contributions from the community, and we will definitely look into it.
Have a great day!