apple / coremltools

Core ML tools contain supporting tools for Core ML model conversion, editing, and validation.
https://coremltools.readme.io
BSD 3-Clause "New" or "Revised" License

got different output when running coremltool and xcode given same input image #854

Open Guanbin-Huang opened 4 years ago

Guanbin-Huang commented 4 years ago

❓Question

**Basic information**
Please provide a descriptive title for your feedback: mlmodel run in Xcode doesn't produce the same results as the model run in coremltools
Which area are you seeing an issue with? Xcode
What type of feedback are you reporting? Incorrect/Unexpected Behavior

**Details**
What version of Xcode are you using? Xcode 12
Please describe the issue: I converted a PyTorch model to a Core ML model; the conversion error is less than 10^(-7). But when I deploy the converted model on iOS, given the same input image (already at the correct size) and the same model file, the results from the Core ML Vision framework in Xcode are completely different from those produced by coremltools in Python.
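The parity check being described can be sketched with a small helper. This is a hedged illustration, not the reporter's script: `parity_report` and the toy arrays below are hypothetical, and only the numpy comparison step is shown (running the actual Core ML prediction requires macOS and coremltools):

```python
import numpy as np

def parity_report(ref, test, tol=1e-7):
    """Compare two model outputs element-wise; return the max absolute
    error and the fraction of elements whose error exceeds tol."""
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    abs_err = np.abs(ref - test)
    return abs_err.max(), (abs_err > tol).mean()

# Toy example: two "outputs" differing by ~1e-5, far above a 1e-7 tolerance
a = np.array([0.1, 0.2, 0.3])
b = a + 1e-5
max_err, frac_over = parity_report(a, b, tol=1e-7)
```

With a 1e-7 tolerance, every element of the toy pair is flagged, which is the same shape of result the reporter sees between the Python and Swift outputs.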

Note that I pass in an already-resized image, so no resizing is needed on either side; differences in resizing method can therefore be ruled out. Please list the steps you took to reproduce the issue:

  1. Set up the same environment as mine by following requirements.txt.
  2. Download the files provided. The zip contains two folders, pythonReid and swiftReid.
  3. Open text.py in the pythonReid folder and run it. You will see the results, named python_output.
  4. Open aaaa.xcodeproj and run it. You will see the results, named swift_output.
  5. Compare the first few items; you will find the error is too big (around 10^(-4)). You can copy and paste the numbers into a txt file, replace all punctuation characters (e.g. `,` `;` `[]` `()`) with spaces, load them into numpy, and compare them with the following code:

```python
import numpy as np

abs_err = np.abs(python_out - swift_out)
mask = abs_err > 0.0001
print(mask.sum() / mask.size)
print("abs_err.max:", abs_err.max())
```

You will find that over 50% of the errors are bigger than 10^(-4). What did you expect to happen? I expected the error between the Xcode output and the coremltools output to be around 10^(-7) or less. What actually happened? Over 50% of the errors are bigger than 10^(-4). However, even the errors in Python coremltools

Note that this strange behavior happened even when I used a model released officially by Apple, so I suspect the Core ML Vision framework is to blame.

Conversion error: if you want to check the conversion error, please go to pythonReid/shit/ and run Aligned_reid.py.

The reason I care so much about this error is that it matters a great deal to the downstream part of my task. Since the weights are very small, an error of 10^(-4) is unbearable; it is far bigger than the conversion error of 10^(-7).
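One way to see why 10^(-4) is unbearable when the values themselves are small is to look at relative rather than absolute error. A hedged sketch (the helper name and the sample magnitudes are illustrative assumptions, not taken from the reporter's data):

```python
import numpy as np

def relative_error(ref, test, eps=1e-12):
    """Element-wise relative error, guarded against division by zero."""
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    return np.abs(ref - test) / np.maximum(np.abs(ref), eps)

# If the reference outputs are around 1e-3 in magnitude, an absolute
# error of 1e-4 corresponds to roughly a 10% relative error per element.
small = np.full(4, 1e-3)
off = small + 1e-4
rel = relative_error(small, off)
```

A 10% per-element discrepancy would plausibly break any downstream matching that depends on fine-grained feature distances, which is the reporter's concern.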

System Information

Guanbin-Huang commented 4 years ago

the relevant files can be downloaded here : https://feedbackassistant.apple.com/feedback/8276169

TobyRoseman commented 2 years ago

Is this still an issue with the latest version of coremltools and Xcode?