Closed zhaoqier closed 2 years ago
Hi @zhaoqier ,
In my sample code, we assign the output of the YOLO model to the `predictedObjects` property of the `DrawingBoundingBoxView` instance.
When the `predictedObjects` property is assigned from outside, the code above runs and, after passing through several methods, the `createLabelAndBox` method is called. In `createLabelAndBox`, I implemented parsing the observations and transforming the Vision framework coordinate system into the UIKit coordinate system (the following lines).
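To illustrate the flow described above, here is a minimal sketch of the property-observer pattern, assuming `DrawingBoundingBoxView` reacts to the assignment via `didSet` (the class body and the `String` stand-in for the real observation type are simplifications for illustration, not the actual implementation):

```swift
import Foundation

// Hedged sketch: a simplified DrawingBoundingBoxView whose
// `predictedObjects` property observer triggers drawing.
final class DrawingBoundingBoxView {
    var drawnLabels: [String] = []

    // Assigning this property from outside lands in `didSet`,
    // which eventually reaches createLabelAndBox for each prediction.
    var predictedObjects: [String] = [] {
        didSet {
            for object in predictedObjects {
                createLabelAndBox(for: object)
            }
        }
    }

    private func createLabelAndBox(for object: String) {
        // In the real view this parses the observation and converts
        // Vision coordinates to UIKit coordinates before drawing;
        // here we just record the label.
        drawnLabels.append(object)
    }
}
```

The point is simply that no method is called explicitly by the caller; the assignment itself drives the drawing pipeline.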
As far as I know, you need to understand the following two coordinate systems:

- Vision framework coordinate system: origin `(0,0)` at the bottom-left, with x and y normalized to the range 0.0-1.0
- UIKit coordinate system: origin `(0,0)` at the top-left, with x and y measured in screen points (in the iPhone 14 Pro Max case, `(0,0)`-`(430, 932)`: check here)

I might be wrong, but please double-check the details yourself.
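The conversion between the two systems above can be sketched as follows. This is a minimal illustration, not the repository's actual code; the function name and the `viewSize` parameter are my own, and it assumes the normalized rect fills the view without aspect-ratio letterboxing:

```swift
import Foundation

// Hedged sketch: convert a Vision-style bounding box
// (origin at bottom-left, x/y normalized to 0.0...1.0)
// into a UIKit-style rect (origin at top-left, in screen points).
// `viewSize` is the size of the view displaying the camera preview.
func uikitRect(fromVisionRect normalized: CGRect, in viewSize: CGSize) -> CGRect {
    let width  = normalized.width  * viewSize.width
    let height = normalized.height * viewSize.height
    let x = normalized.minX * viewSize.width
    // Flip the y axis: Vision measures y from the bottom edge,
    // while UIKit measures origin.y from the top edge.
    let y = (1.0 - normalized.maxY) * viewSize.height
    return CGRect(x: x, y: y, width: width, height: height)
}
```

Vision also ships a helper for this scaling step, `VNImageRectForNormalizedRect(_:_:_:)`, though you still have to handle the y-axis flip yourself when drawing in UIKit.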
If you have any questions, feel free to ask :) (I might not be able to reply immediately, but 😄)
Hi @tucan9389: Thanks for your reply; it was helpful, especially for understanding your code. I think I have solved my problem now. The main cause of the incorrect coordinates was that I added a mask view in my own application and made a wrong calculation myself.
Thanks for your support again!
Regards, Kier
Hi @tucan9389 :
Thanks for your work firstly!
I have successfully converted a yolov5s model into an .mlmodel and deployed it using your code. My model performs well on static images, but in real-time detection it either finds nothing or reports inaccurate positions. May I ask what the possible reason is? Do I need to make some modifications in the iOS app code? The interesting thing is that my model also performed well when I tested it on video locally, so I am confused about what went wrong.
Thanks for your help!