YuvalNirkin / face_segmentation

Deep face segmentation in extremely hard conditions
Apache License 2.0

Caffe Model to CoreML Model #21

Open ghost opened 4 years ago

ghost commented 4 years ago

Greetings,

This may be off topic, but I'm just trying to find some help.

I'm trying to get the 300 FCN model to run in my Xcode project. I'm converting the .caffemodel to .mlmodel with coremltools:

```python
import coremltools

# caffe_model: path to the .caffemodel, or a (.caffemodel, .prototxt) tuple
coreml_model = coremltools.converters.caffe.convert(
    caffe_model,
    image_input_names='data',
    is_bgr=True,
    red_bias=-104,
    blue_bias=-123,
    green_bias=-117,
    image_scale=1
)
```

As far as I understand, the input image is expected in BGR color space with the above-mentioned biases. After conversion, when I read the model description with coremltools, I get:

```
input {
  name: "data"
  type {
    imageType {
      width: 300
      height: 300
      colorSpace: BGR
    }
  }
}
output {
  name: "score"
  type {
    multiArrayType {
      dataType: DOUBLE
    }
  }
}
metadata {
  userDefined {
    key: "coremltoolsVersion"
    value: "3.3"
  }
}
```

The output has no shape information.
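At least I can check what shape information is visible from Swift at runtime. A minimal sketch (here `model` is the instance of the class Xcode generates from the .mlmodel, and `prediction` is the prediction result from the code further down):

```swift
import CoreML

// The compiled model description may report no shape for "score"...
let scoreDesc = model.model.modelDescription.outputDescriptionsByName["score"]
if let shape = scoreDesc?.multiArrayConstraint?.shape {
    print("score shape in model description:", shape)
}

// ...but the MLMultiArray returned by an actual prediction still knows its
// own dimensions at runtime:
// print(prediction.score.shape)   // e.g. [channels, height, width]
```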

When I add the model to my Xcode project, I run it by passing a CVPixelBuffer as input:

```swift
let input = buffer(from: userSelectedImage_UI)
guard let prediction = try? model.prediction(data: input!) else { return }
```
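For completeness, `buffer(from:)` is just the usual UIImage-to-CVPixelBuffer conversion, roughly like this (a sketch; the 300x300 size is there to match the model's input):

```swift
import UIKit
import CoreVideo

// Renders a UIImage into a 300x300 32ARGB CVPixelBuffer for Core ML.
func buffer(from image: UIImage) -> CVPixelBuffer? {
    let width = 300, height = 300
    let attrs = [kCVPixelBufferCGImageCompatibilityKey as String: true,
                 kCVPixelBufferCGBitmapContextCompatibilityKey as String: true] as CFDictionary
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                     kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)
    guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }

    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                  width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue),
          let cgImage = image.cgImage else { return nil }

    // Draw the image scaled to the buffer's size.
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return buffer
}
```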

The output of the model is an MLMultiArray:

```swift
let output = prediction.score
```

How can I convert it to a CVPixelBuffer if there is no shape information?

I've tried using MLMultiArray-to-image converters to no avail; the output is just a black image.

I've tried this method and this one.
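For reference, this is roughly the kind of conversion I'm attempting, assuming the score array is laid out as [classes, height, width] with class 0 as background (an assumption I can't confirm because of the missing shape):

```swift
import CoreML
import CoreVideo

// Sketch: turn the "score" MLMultiArray into a one-channel mask CVPixelBuffer
// by taking a per-pixel argmax over the class channel.
func maskPixelBuffer(from score: MLMultiArray) -> CVPixelBuffer? {
    guard score.shape.count == 3 else { return nil }
    let classes = score.shape[0].intValue
    let height  = score.shape[1].intValue
    let width   = score.shape[2].intValue

    var pixelBuffer: CVPixelBuffer?
    guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                              kCVPixelFormatType_OneComponent8, nil,
                              &pixelBuffer) == kCVReturnSuccess,
          let buffer = pixelBuffer else { return nil }

    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }
    guard let base = CVPixelBufferGetBaseAddress(buffer) else { return nil }
    let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
    let pixels = base.assumingMemoryBound(to: UInt8.self)

    for y in 0..<height {
        for x in 0..<width {
            // Find the class with the highest score at this pixel.
            var bestClass = 0
            var bestScore = -Double.infinity
            for c in 0..<classes {
                let index = [NSNumber(value: c), NSNumber(value: y), NSNumber(value: x)]
                let value = score[index].doubleValue
                if value > bestScore {
                    bestScore = value
                    bestClass = c
                }
            }
            // Class 0 assumed to be background -> black; anything else -> white.
            pixels[y * bytesPerRow + x] = bestClass == 0 ? 0 : 255
        }
    }
    return buffer
}

// Usage: let mask = maskPixelBuffer(from: prediction.score)
```

If the array turns out to have an extra leading dimension (e.g. [1, classes, height, width]), the indexing would need an extra leading zero.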

If anybody knows how to get this working in CoreML, I'd really appreciate it.

lekhanhtoan37 commented 3 years ago

Dear @malemo, did you figure out how to implement this in CoreML yet? Hope to hear from you soon.