Hi, again! I found a solution!!! You need to specify `compute_units=ct.ComputeUnit.CPU_ONLY` for inference, like this:
```python
import coremltools as ct

coreml_model = ct.models.MLModel(
    model=str(cfg.converted_model_path),
    compute_units=ct.ComputeUnit.CPU_ONLY,
)
```
And the new statistics are wonderful (presumably the earlier mismatch came from the GPU/Neural Engine path running in float16, while CPU_ONLY runs in float32):

Max absolute difference: 3.2037497e-07
Min absolute difference: 0.0
Mean absolute difference: 5.7334546e-08
🐞 Describing the bug
An interesting situation. I have a PyTorch model for image classification, though the details do not matter. I need to make it work on mobile devices, so I decided to convert it into TFLite and CoreML formats. The problem is that the predictions of the original PyTorch model and the converted TFLite model are basically identical, BUT the mismatch in predictions between the original model and the converted CoreML model is on the order of 1e-2, which is a substantial error.

It would be hard for me to give you a complete reproducible example, but I will give you the main pieces, which should be enough.
To Reproduce
The model is a `timm` model. I use a finetuned version, but you can take a pretrained one; the architecture is identical. This is the conversion function:
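The conversion code itself did not come through above; below is a minimal sketch of what a typical PyTorch to Core ML conversion of a timm classifier looks like. The model name, input name, and input size are placeholders, not the reporter's actual settings (the original may also have used `ct.ImageType` with normalization instead of a plain tensor input):

```python
import coremltools as ct
import timm
import torch

# Hypothetical stand-in: any timm classifier follows the same pattern.
model = timm.create_model("resnet50", pretrained=True)
model.eval()

# Trace with a dummy input of the expected size (assumed 1x3x224x224).
example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)

# Convert the traced module; "input" is an assumed input name.
coreml_model = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input", shape=example_input.shape)],
)
coreml_model.save("model.mlpackage")
```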
And here is the validation code:
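The validation snippet is also missing here; a sketch of the comparison being described (the same input through both models, then max/min/mean absolute difference) could look like the following. It assumes the `model`, `coreml_model`, and input name from the sketch above:

```python
import numpy as np
import torch

# Stand-in for a real preprocessed image batch.
image = torch.rand(1, 3, 224, 224)

with torch.no_grad():
    torch_out = model(image).numpy()

# "input" is the assumed input name; check coreml_model.get_spec() for the real one.
coreml_out = coreml_model.predict({"input": image.numpy()})
coreml_out = np.asarray(list(coreml_out.values())[0])

diff = np.abs(torch_out - coreml_out)
print("Max absolute difference: ", diff.max())
print("Min absolute difference: ", diff.min())
print("Mean absolute difference:", diff.mean())
```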
The TFLite case is analogous. So here are the statistics for TFLite:
Max absolute difference: 6.556511e-07
Min absolute difference: 0.0
Mean absolute difference: 3.983344e-08
And here are the statistics for CoreML:
Max absolute difference: 0.012094557
Min absolute difference: 8.384697e-06
Mean absolute difference: 0.0028901247
Please help me. Thank you!
Just in case, here is the TFLite conversion (I use the `ai_edge_torch` package):
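The `ai_edge_torch` snippet did not survive either; the usual pattern is roughly the following, with the same caveat that the model and input shape are placeholders:

```python
import ai_edge_torch
import torch

# Assumes `model` is the same timm classifier used for the Core ML export.
sample_inputs = (torch.rand(1, 3, 224, 224),)

edge_model = ai_edge_torch.convert(model.eval(), sample_inputs)
edge_model.export("model.tflite")
```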
System environment: coremltools 8.0, but it is the same for 7.0.