apple / turicreate

Turi Create simplifies the development of custom machine learning models.
BSD 3-Clause "New" or "Revised" License

Confidence level difference between .mlmodel and .model #2093

Open dhgokul opened 5 years ago

dhgokul commented 5 years ago

@gustavla @TobyRoseman

I have trained a model and created both .model and .mlmodel files.

Predicting a sample image using the .model file:

```python
import turicreate as tc

# load the image
test = tc.SFrame({'image': [tc.Image('test.jpg')]})

# load the model and predict
model = tc.load_model('mymodel.model')
test['predictions'] = model.predict(test)
print(test['predictions'])
```

Result : [[{'confidence': 0.5855070428502438, 'type': 'rectangle', 'coordinates': {'y': 467.0532762422659, 'x': 722.4311472469142, 'width': 280.8037508451022, 'height': 486.56053249652575}, 'label': 'test'}]]

When I import the Core ML model into an iOS app and pass the same frame, I get a confidence of 0.84.

Why is the confidence score different between .model and .mlmodel?

Any help appreciated!

TobyRoseman commented 5 years ago

Take a look at the docstrings for export_coreml(...). This might be relevant:

> The instances are not sorted by confidence, so the first one will generally not have the highest confidence (unlike in predict). Also unlike the predict function, the instances have not undergone what is called non-maximum suppression, which means there could be several instances close in location and size that have all discovered the same object instance.
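For readers unfamiliar with the term, the post-processing that `predict` applies but the raw exported model skips can be sketched in plain Python. This is an illustrative greedy NMS over center-format boxes, not Turi Create's actual implementation, and the IoU threshold here is arbitrary:

```python
def iou(a, b):
    """Intersection-over-union of two center-format boxes {x, y, width, height}."""
    ax1, ay1 = a['x'] - a['width'] / 2, a['y'] - a['height'] / 2
    ax2, ay2 = a['x'] + a['width'] / 2, a['y'] + a['height'] / 2
    bx1, by1 = b['x'] - b['width'] / 2, b['y'] - b['height'] / 2
    bx2, by2 = b['x'] + b['width'] / 2, b['y'] + b['height'] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a['width'] * a['height'] + b['width'] * b['height'] - inter
    return inter / union if union > 0 else 0.0

def non_maximum_suppression(detections, iou_threshold=0.45):
    """Sort by confidence, then keep only the best box per cluster of overlaps."""
    kept = []
    for det in sorted(detections, key=lambda d: d['confidence'], reverse=True):
        if all(iou(det['coordinates'], k['coordinates']) < iou_threshold
               for k in kept):
            kept.append(det)
    return kept
```

So the first raw detection from the exported model may be a low-confidence duplicate that `predict` would have suppressed, which is one reason the two code paths report different numbers for "the" detection.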

dhgokul commented 5 years ago

@TobyRoseman Thanks for your response !

When I export the Core ML model, I use the include_non_maximum_suppression=False option:

```python
model.export_coreml('/turicreate/Turicreate' + modelname + '.mlmodel',
                    include_non_maximum_suppression=False)
```

Is there any way to make the .model and .mlmodel confidence levels match, so I can check accuracy?

znation commented 5 years ago

The defaults should probably be the same both for predict and export_coreml such that the predictions come out the same using defaults in Turi Create and using the exported model in CoreML. Tagging as a bug to make sure this is addressed.

nickjong commented 5 years ago

The default confidence levels for predict and export_coreml are the same: 0.25

Hmm, you should be able to use CoreMLTools to load the exported model and perform inference on it, in a controlled Python environment on your Mac, before trying to deploy to iOS. If you sort both the Turi output and the CoreML output by confidence, it would be interesting to see the results.

Note that in general, the results might not be precisely the same, since different NN inference engines are being used in each code path.
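The comparison suggested above can be sketched once both outputs have been reduced to lists of detections; the dict keys and the tolerance below are assumptions for illustration, not a fixed API:

```python
def sorted_confidences(detections):
    """Return (label, confidence) pairs sorted by descending confidence."""
    return sorted(((d['label'], d['confidence']) for d in detections),
                  key=lambda pair: pair[1], reverse=True)

def compare_outputs(turi_dets, coreml_dets, tolerance=0.05):
    """Pair same-rank detections from each path and flag gaps above tolerance."""
    mismatches = []
    for (t_label, t_conf), (c_label, c_conf) in zip(
            sorted_confidences(turi_dets), sorted_confidences(coreml_dets)):
        if t_label != c_label or abs(t_conf - c_conf) > tolerance:
            mismatches.append(((t_label, t_conf), (c_label, c_conf)))
    return mismatches
```

Feeding in the two results reported in this thread, `compare_outputs` would flag the 0.58 vs 0.84 gap as well outside any plausible inference-engine tolerance, which is why sharing the model and test image matters.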

dhgokul commented 5 years ago

@nickjong Thanks for your response !

Predicting the same image using the .model and .mlmodel files:

.model -> confidence: 0.58
.mlmodel -> confidence: 0.84

The confidence levels are different. Please explain why the confidence score differs between the two models.

dhgokul commented 5 years ago

@nickjong @znation @TobyRoseman any updates?

nickjong commented 5 years ago

Hmm, some difference is to be expected depending on how the image is resized and otherwise converted into the raw NN input, but I am surprised that the difference is this high. Would you be able to share the saved .model and the image you're using for testing? (If not, we can try to repro on our end, but it might be helpful to ensure we're looking at the same things.)