ultralytics / yolo-flutter-app

A Flutter plugin for Ultralytics YOLO computer vision models
https://ultralytics.com
GNU Affero General Public License v3.0

Thread 4: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=com.apple.mlassetio Code=1 "Failed to parse the model specification. Error: Field number 3 has wireType 4, which is not supported." UserInfo={NSLocalizedDescription=Failed to parse the model specification. Error: Field number 3 has wireType 4, which is not supported.} #6

Closed: stm233 closed this issue 6 months ago

stm233 commented 7 months ago

When I initialize the YOLO model on a real iOS device, I get the following error:

Thread 4: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=com.apple.mlassetio Code=1 "Failed to parse the model specification. Error: Field number 3 has wireType 4, which is not supported." UserInfo={NSLocalizedDescription=Failed to parse the model specification. Error: Field number 3 has wireType 4, which is not supported.}

It looks like the model should be compatible with iOS Core ML, but I have tried the tflite files from the example and still get this error. My iOS version is 17.1.2.

Is there any way to solve this? Thanks

pderrenger commented 7 months ago

Hey there! 👋 It sounds like you're encountering an issue with model compatibility on iOS. This error often occurs when there's a mismatch between the model format expected by Core ML and the format of the YOLO model you're using. Here are a few steps that might help:

  1. Ensure Model Compatibility: Make sure your YOLO model is correctly converted to Core ML format. Tools like Core ML Tools can be used for conversion, but pay close attention to version compatibility (a small loading sketch follows this list for surfacing the underlying Core ML error instead of crashing).
  2. Update iOS and Tools: Since you're on iOS 17.1.2, ensure that all your tools (Xcode, Core ML Tools, etc.) are updated to their latest versions to avoid compatibility issues.
  3. Check the Model's Input and Output: Verify that the model's input and output specifications match what Core ML expects. Sometimes, minor discrepancies can cause these errors.
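
If it helps, here's a minimal Swift sketch (not the plugin's actual code) of loading a model file with do/catch instead of try!, so the underlying Core ML error is printed rather than crashing the thread:

```swift
import CoreML

// Minimal sketch: compile and load a model file, surfacing the underlying
// Core ML error instead of crashing via `try!`. `url` should point at a
// .mlmodel file; a non-Core ML file (e.g. a .tflite passed by mistake)
// should fail to parse here with an error like the one above.
func loadCoreMLModel(at url: URL) -> MLModel? {
    do {
        // Compilation fails early if the file is not a valid Core ML model.
        let compiledURL = try MLModel.compileModel(at: url)
        return try MLModel(contentsOf: compiledURL)
    } catch {
        print("Core ML model load failed: \(error)")
        return nil
    }
}
```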

If the issue persists, I'd recommend checking the documentation for any updates or known issues related to iOS model deployment at https://docs.ultralytics.com. Our docs are continually updated with troubleshooting tips and might have just what you need.

Feel free to share more details if you're still stuck, and the community or our team might be able to offer more targeted advice. Thanks for reaching out! 😊

stm233 commented 7 months ago

Thanks for your reply.

I was mistakenly using the tflite file, which is for Android. After I switched to the mlmodel following your README, I no longer hit this problem.

But I am facing a new issue: the bounding box coordinates are wrong. Most of the time the boxes are offset by roughly 20-30 pixels from the correct position. With the same model on Android the coordinates are correct, but on iOS they are off. Do you have any idea about this problem?

The number and class of the bounding boxes are right; only the coordinates are wrong. I also tried different image sizes (320 x 320, 192 x 320, 320 x 192, 640 x 640), but that did not help.

stm233 commented 7 months ago

    for i in 0..<100 {
        if i < results.count && i < self.numItemsThreshold {
            let prediction = results[i]

            var rect = prediction.boundingBox  // normalized xywh, origin lower left
            print("rect: \(rect)")

            // if screenRatio >= 1 { // iPhone ratio = 1.218
            //     let offset = (1 - screenRatio) * (0.5 - rect.minX)
            //     let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: offset, y: -1)
            //     rect = rect.applying(transform)
            //     // rect.size.width *= screenRatio
            // } else { // iPad ratio = 0.75
            //     let offset = (screenRatio - 1) * (0.5 - rect.maxY)
            //     let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: offset - 1)
            //     rect = rect.applying(transform)
            //     rect.size.height /= screenRatio
            // }

            // Scale the normalized rect up to the target display size
            rect = VNImageRectForNormalizedRect(rect, Int(screenWidth), Int(newHeight))
            print("rect: \(rect)")

            // The labels array is a list of VNClassificationObservation objects,
            // with the highest scoring class first in the list.
            let label = prediction.labels[0].identifier
            let index = self.labels.firstIndex(of: label) ?? 0
            let confidence = prediction.labels[0].confidence
            // ...
        }
    }

I removed this code from the iOS folder. I don't think it suits people who just want to display the results in their own way.

pderrenger commented 7 months ago

Hey there! 👋 It seems like you're facing an issue with bounding box coordinates being offset on iOS devices. This can be tricky since the offset might be due to how the coordinates are translated or scaled to fit the screen dimensions.

For your issue, since the bounding boxes display correctly on Android but are offset on iOS, it likely relates to how the bounding box coordinates are adjusted for the display size. The commented-out code you showed seems to be attempting to adjust the box positions based on the device's aspect ratio, which is a step in the right direction.

Considering you're already converting the normalized bounding box coordinates to pixel coordinates with VNImageRectForNormalizedRect(rect, Int(screenWidth), Int(newHeight)), ensure the screenWidth and newHeight values accurately reflect the display area where the bounding boxes will be rendered.

One quick suggestion is to double-check the transformation and scaling logic. If you're working with Vision's normalized coordinates (where (0,0) is the bottom-left corner and (1,1) the top-right), and your image is displayed in a view whose actual dimensions differ from the model input size, you'll need to scale, flip, and possibly translate these normalized coordinates into your view space.

Given your current setup, you might not need the commented transformations if VNImageRectForNormalizedRect does the job well. However, ensure that the aspect ratio of the input image is maintained when displayed on the screen. Any discrepancy in aspect ratios between the model's input image size and the displayed image size can cause such offsets.
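
As a rough illustration (a sketch with hypothetical names, not the plugin's code), the aspect-fit scale and letterbox offsets can be computed like this and then applied to every box:

```swift
import CoreGraphics

// Sketch (hypothetical helper): compute the scale and letterbox offsets used
// to fit an image of `imageSize` into a view of `viewSize` with aspect-fit.
// Each bounding box then needs the same scale and offsets applied.
func aspectFitTransform(imageSize: CGSize,
                        viewSize: CGSize) -> (scale: CGFloat, xOffset: CGFloat, yOffset: CGFloat) {
    let scale = min(viewSize.width / imageSize.width,
                    viewSize.height / imageSize.height)
    let xOffset = (viewSize.width - imageSize.width * scale) / 2
    let yOffset = (viewSize.height - imageSize.height * scale) / 2
    return (scale, xOffset, yOffset)
}
```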

A common pitfall is forgetting that coordinate systems in different frameworks (e.g., UIKit vs. Vision) place the origin at different corners (top-left vs. bottom-left), leading to inverted Y coordinates.
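
For example, a minimal sketch (assuming the view shares the image's aspect ratio; viewSize is a placeholder for your preview's size) of converting a Vision normalized box into a UIKit rect:

```swift
import UIKit
import Vision

// Sketch: convert a Vision normalized bounding box (origin bottom-left)
// into a UIKit rect (origin top-left) for a view of `viewSize`. If the
// preview is letterboxed, also apply the aspect-fit offsets from above.
func uiRect(for normalizedBox: CGRect, in viewSize: CGSize) -> CGRect {
    // Scale the normalized box up to view coordinates.
    let scaled = VNImageRectForNormalizedRect(normalizedBox,
                                              Int(viewSize.width),
                                              Int(viewSize.height))
    // Flip the Y axis: Vision's origin is bottom-left, UIKit's is top-left.
    return CGRect(x: scaled.origin.x,
                  y: viewSize.height - scaled.origin.y - scaled.height,
                  width: scaled.width,
                  height: scaled.height)
}
```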

If you're still facing issues, a more detailed look at how you're handling the image display and bounding box rendering in relation to the actual image size and screen dimensions might be needed. Keep experimenting with different sizes, and pay close attention to the aspect ratios and coordinate systems in use.

Hope this nudges you in the right direction! Keep us posted on your progress 😊.

sergiossm commented 7 months ago

@stm233 can you try exporting your model using `yolo export format=mlmodel model=path/to/best.pt imgsz=[320,192] half nms`?

Edit: You can also make sure the exported model works using Xcode.
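
If you want to check it programmatically as well, here's a small sketch (placeholder file name "best", not the plugin's code) that loads the exported model from the app bundle and wraps it in a Vision request:

```swift
import CoreML
import Vision

// Sketch: load the exported model (compiled by Xcode to .mlmodelc in the
// bundle; "best" is a placeholder name) and wrap it in a VNCoreMLRequest.
func makeDetectionRequest() throws -> VNCoreMLRequest {
    guard let modelURL = Bundle.main.url(forResource: "best", withExtension: "mlmodelc") else {
        fatalError("Compiled model not found in the app bundle")
    }
    let vnModel = try VNCoreMLModel(for: MLModel(contentsOf: modelURL))
    let request = VNCoreMLRequest(model: vnModel) { request, _ in
        // With nms included in the export, results arrive as
        // VNRecognizedObjectObservation (labels + normalized bounding boxes).
        let detections = request.results as? [VNRecognizedObjectObservation] ?? []
        print("Detections: \(detections.count)")
    }
    request.imageCropAndScaleOption = .scaleFill
    return request
}
```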

(Screenshot attached: 2024-04-11 at 17:32:58)

github-actions[bot] commented 6 months ago

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the Ultralytics documentation at https://docs.ultralytics.com.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcome!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐