ultralytics / yolov5

YOLOv5 πŸš€ in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

CoreML model error: The size of the output layer '740' in the neural network does not match the number of classes in the classifier #4338

Closed Kaisershmarren closed 3 years ago

Kaisershmarren commented 3 years ago

I've exported the yolov5s.pt model to CoreML creating yolov5s.mlmodel. I've followed the instructions reported in #251 and #3238. The command I used is the following:

python3 export.py --weights yolov5s.pt --img 640 --batch 1 --train

The conversion completed without errors (or at least it seemed to) and produced the model with the correct class labels.

I've included it in the Xcode sample project that Apple provides for object detection: https://developer.apple.com/documentation/vision/recognizing_objects_in_live_capture

But when the project runs on the iPhone, it throws the following error and nothing is recognised on screen: NSLocalizedDescription=The size of the output layer '740' in the neural network does not match the number of classes in the classifier.

I've also tested the YOLOv3 MLModel that Apple provides on its site, and it works perfectly.

My environment is:

macOS Big Sur 11.5.1
Python 3.9
torch 1.9.0 (CPU)
onnx 1.10.0
scikit-learn 0.24.2

github-actions[bot] commented 3 years ago

πŸ‘‹ Hello @Kaisershmarren, thank you for your interest in YOLOv5 πŸš€! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a πŸ› Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python>=3.6.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

github-actions[bot] commented 3 years ago

πŸ‘‹ Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 πŸš€ and Vision AI ⭐!

Kaisershmarren commented 3 years ago

Unfortunately no... But I've just realised I didn't describe all the steps I took. I was trying to solve the absence of class labels in the iOS Core ML model, so I modified export.py. I managed to get the class labels, but then the error on '740' appeared...

nathan-gage commented 2 years ago

I can't even get the class labels working. How did you do that?

MaxNazarov93 commented 2 years ago

@ngagesmu you should create a ClassifierConfig with your own class labels and pass it into the convert function:

import coremltools as ct
from coremltools.converters import ClassifierConfig

# ts is the traced TorchScript model and im the example input tensor from export.py
classifier_config = ClassifierConfig(class_labels=['class1', 'class2', 'class3', 'class4'])
ct_model = ct.convert(
    ts,
    inputs=[ct.ImageType('image', shape=im.shape, scale=1 / 255, bias=[0, 0, 0])],
    classifier_config=classifier_config,
)
zjsjayce commented 2 years ago


@MaxNazarov93 I added the labels as you suggested, but it still doesn't work: NSLocalizedDescription = "The size of the output layer 'var_1162' in the neural network does not match the number of classes in the classifier."

zjsjayce commented 2 years ago


@Kaisershmarren how did you finally fix it?

blocks-ai commented 1 year ago

Hi friends, any news?

blocks-ai commented 1 year ago

@Kaisershmarren how did you solve this problem?

glenn-jocher commented 1 year ago

@blocks-ai, apologies for the delayed response. The error indicates that the size of the model's output layer does not match the number of classes declared in the classifier configuration. One possible fix is to check that the class labels you pass to the classifier match the classes of your trained model.

Regarding the issue on iOS, unfortunately, I could not find enough information in your post that would help in debugging the problem. However, some possible ways to resolve this issue include checking to verify that the correct model file is being used in the iOS app, ensuring that the input size of the model matches the input image's size, and verifying the number of output layers and their sizes.

I hope this helps. If you have any further questions, please do not hesitate to ask.
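For context, the size mismatch can be made concrete with a little arithmetic (a sketch based on YOLOv5's default detection head, not on the specific export in this thread): a Core ML classifier expects a single output whose size equals the number of class labels, while a raw YOLOv5 export emits a grid of box candidates. Assuming a 640x640 input, strides 8/16/32, 3 anchors per grid cell, and 80 COCO classes:

```python
# Sketch: why Vision's classifier check rejects a raw YOLOv5 export.
# A Core ML classifier expects one output whose size equals the number of
# class labels (80 for COCO). YOLOv5's detection head instead emits a grid
# of box candidates.

num_classes = 80
strides = [8, 16, 32]
anchors_per_cell = 3
img_size = 640

cells = sum((img_size // s) ** 2 for s in strides)  # 80*80 + 40*40 + 20*20 = 8400
predictions = anchors_per_cell * cells              # 25200 candidate boxes
values_per_prediction = 5 + num_classes             # x, y, w, h, objectness + class scores

print(predictions, values_per_prediction)  # 25200 85
```

An output carrying 25200 x 85 values can never satisfy a classifier that expects exactly 80, which is why attaching a ClassifierConfig alone does not resolve the error.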

Kaisershmarren commented 1 year ago

Hi all. Sorry for the late answer… I managed to make YOLOv5 work with Apple Vision by using what Leon0402 did in https://github.com/dbsystel/yolov5-coreml-tools.

I suggest you, @glenn-jocher, use the source code provided by Leon to integrate the new layers into your Core ML export logic, because an object detection model in Core ML is of little use if it is not compliant with Vision's requirements.

bye
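To illustrate what those extra layers do, here is a minimal pure-Python sketch of the idea (not the actual Core ML implementation): a Vision-compliant detector ends in a non-maximum-suppression (NMS) stage that collapses thousands of raw box candidates into a short list of final detections.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy NMS: keep the highest-scoring box, drop boxes overlapping it, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

# Two overlapping detections of the same object plus one distinct object:
boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]
```

In a Vision-compatible export this step lives inside the Core ML model itself (as in Apple's YOLOv3.mlmodel pipeline), so the app receives final detections rather than the raw grid output.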

glenn-jocher commented 1 year ago

@Kaisershmarren hello,

Thank you for letting us know about your success in using the source code provided by Leon0402 to make YOLOv5 work with Apple Vision. We appreciate your suggestion regarding integrating the new layers in our export logic for Core ML to ensure compliance with Vision requirements.

We are always open to suggestions and contributions from the community, and we will definitely look into it.

Have a great day!