huangxf14 opened this issue 6 years ago
And I run into the same problem when I run the Core ML model just on the simulator. To figure out whether the 'group' setting in convolution matters, I deleted all the group parameters from the Caffe prototxt and filled the caffemodel with zeros. It turns out the model without groups gives the same result.
This happens often enough that I wrote a blog post about it: http://machinethink.net/blog/help-core-ml-gives-wrong-output/
Thanks for your reply. I have fixed the bug with the input, but I still see a small difference between the results I get from Core ML and Caffe. The two images below are the results from Caffe and Core ML. I use the model to segment humans from the image. As you can see, there is a small difference. Is it because of the different implementations in Caffe and Core ML?
This might be caused by the fact that the GPU in iOS devices uses 16-bit floating point instead of the 32-bit floating point used by Caffe. I would try running the Core ML model in CPU mode (see MLPredictionOptions) and comparing the output then.
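A minimal sketch of what running in CPU mode looks like, using the MLPredictionOptions API mentioned above; `model` and `input` are placeholders for your own compiled model and its feature provider:

```swift
import CoreML

// Run a prediction with the GPU disabled, so the network uses 32-bit
// floats on the CPU instead of 16-bit floats on the GPU.
func predictOnCPU(model: MLModel, input: MLFeatureProvider) throws -> MLFeatureProvider {
    let options = MLPredictionOptions()
    options.usesCPUOnly = true   // force CPU execution (float32 math)
    return try model.prediction(from: input, options: options)
}
```

If the CPU-mode output matches Caffe much more closely, the remaining difference on the device is likely just float16 precision.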
I have tried running the model on a Linux server, the simulator, and an iPhone X, and I get three different results. Does it run in CPU mode on the simulator? (The two images I show are from the Linux server and the simulator.)
Yes, the simulator uses the CPU, not the GPU.
Hello! I have used the Caffe model you provided to build a semantic segmentation (two classes) model (I just changed the model into an FCN model). Then I converted the Caffe model to a Core ML model. I can get a reasonable result on an iPhone X, but the result is not as good and differs from the result I get from the Caffe model on Ubuntu. Have you run into problems like this? Is there any difference between Caffe and Core ML that matters? I am confused by it, and thanks for your reply.