apple / turicreate

Turi Create simplifies the development of custom machine learning models.
BSD 3-Clause "New" or "Revised" License

Different behavior between Model and Core ML model #1016

Closed cianiandreadev closed 5 years ago

cianiandreadev commented 6 years ago

I successfully trained an object detection model (default TC YOLO) with TC b3 and exported it in Core ML format. My model was trained for 8000 iterations and reached a final loss of 0.8.

I then validated it on some images using TC and the bounding-box drawing util, and it recognizes them better than I expected!

I then downloaded the sample project for recognizing objects in live capture presented by @znation at WWDC and replaced the model in the project with my new model.

What is weird is that the objects are no longer recognized. It is NOT a problem with VNDetectedObjectObservation, because observations are correctly returned; rather, the class and the bounding box do not represent the detected object correctly (wrong class and wrong bounding box). My development environment is iOS 12 beta 9 and Xcode 10 beta 6, running on an iPad Pro (2017 or 2016, I don't remember).

From my first tests this looks like it could be a rotation issue, but I don't know whether that is the real cause, nor how to fix it.
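For what it's worth, a rotation mismatch like this is commonly addressed by telling Vision the orientation of the incoming camera frames via the `orientation:` parameter of `VNImageRequestHandler`. A minimal sketch, assuming back-camera frames; the `exifOrientation(for:)` helper and its portrait/landscape mapping are illustrative assumptions, not code from the sample project:

```swift
import Vision
import UIKit

// Hypothetical helper: map the current device orientation to the EXIF
// orientation Vision expects for frames from the back camera.
func exifOrientation(for deviceOrientation: UIDeviceOrientation) -> CGImagePropertyOrientation {
    switch deviceOrientation {
    case .portraitUpsideDown: return .left
    case .landscapeLeft:      return .up
    case .landscapeRight:     return .down
    default:                  return .right   // portrait
    }
}

func detect(in pixelBuffer: CVPixelBuffer, with request: VNCoreMLRequest) {
    // Passing the wrong (or a fixed) orientation here makes detections
    // work only when the phone happens to match that orientation.
    let handler = VNImageRequestHandler(
        cvPixelBuffer: pixelBuffer,
        orientation: exifOrientation(for: UIDevice.current.orientation),
        options: [:])
    try? handler.perform([request])
}
```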

Has anybody faced a similar issue, or can anyone help me with this?

kid9591 commented 5 years ago

We're seeing this problem too: detection with the Core ML model improves when you turn the phone sideways. In portrait mode the iPhone cannot detect the object, while in landscape mode it can. But as the phone moves horizontally, the offset between the bounding box and the object grows. Could someone explain this problem, please? Any known fix yet? @philimanjaro @gustavla @yousifKashef
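The growing offset between box and object can also come from the coordinate conversion: Vision's `boundingBox` is normalized with a bottom-left origin, while UIKit uses a top-left origin. A sketch of the conversion, assuming the video frame fills a view of size `viewSize` (the function name is an assumption for illustration):

```swift
import Vision
import UIKit

// Convert a Vision observation's normalized, bottom-left-origin bounding box
// into a UIKit rect for a view of the given size.
func viewRect(for observation: VNRecognizedObjectObservation, in viewSize: CGSize) -> CGRect {
    // Scale the normalized rect up to view coordinates...
    let rect = VNImageRectForNormalizedRect(observation.boundingBox,
                                            Int(viewSize.width),
                                            Int(viewSize.height))
    // ...then flip the y-axis, since UIKit's origin is top-left.
    return CGRect(x: rect.minX,
                  y: viewSize.height - rect.maxY,
                  width: rect.width,
                  height: rect.height)
}
```

If the view's aspect ratio differs from the frame's (e.g. with `.resizeAspectFill`), the scale and crop also need to be accounted for, or the boxes will drift as objects move away from the center.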

(Screenshots attached: IMG_3683, IMG_3684, IMG_3685)