akirasosa / mobile-semantic-segmentation

Real-Time Semantic Segmentation in Mobile device
MIT License

example app #6

Closed Zumbalamambo closed 6 years ago

Zumbalamambo commented 7 years ago

Can you please provide an example app?

akirasosa commented 7 years ago

@Zumbalamambo Regarding iOS, there is only one small problem. I use mean standardization, but coremltools only offers redBias, greenBias, blueBias, and imageScale for preprocessing. https://github.com/apple/coremltools/issues/64#issue-274065984

So the conversion script looks like this.

    # It's not strictly correct: ideally each channel would be scaled
    # separately, but coremltools exposes only a single image_scale.
    import coremltools

    coreml_model = coremltools.converters.keras.convert(model,
                                                        input_names='image',
                                                        image_input_names='image',
                                                        red_bias=29.24429131 / 64.881128947,
                                                        green_bias=29.24429131 / 64.881128947,
                                                        blue_bias=29.24429131 / 64.881128947,
                                                        image_scale=1. / 64.881128947)

But this problem does not affect the accuracy very much. Once the model is converted, you just use Vision and CoreML in iOS. You can find an example using Vision and CoreML at https://github.com/stringcode86/MLCameraDemo , which is for a classification problem.
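
Not from this repo, but roughly, the wiring in the view controller could look like the sketch below. `SegmentationModel` is just a placeholder for whatever class Xcode generates from your converted .mlmodel, and `handlePrediction` is the completion handler shown further down.

    import Vision
    import CoreML

    // Inside your view controller.
    // `SegmentationModel` is a placeholder for the class Xcode generates
    // from the .mlmodel produced by the conversion script above.
    lazy var segmentationRequest: VNCoreMLRequest = {
        let vnModel = try! VNCoreMLModel(for: SegmentationModel().model)
        let request = VNCoreMLRequest(model: vnModel, completionHandler: self.handlePrediction)
        request.imageCropAndScaleOption = .scaleFill
        return request
    }()

    // Run the request on one camera frame, e.g. from the
    // AVCaptureVideoDataOutputSampleBufferDelegate callback.
    func predict(pixelBuffer: CVPixelBuffer) {
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform([segmentationRequest])
    }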

It's very easy to adapt that example to a segmentation problem: you take the predicted result and convert it to an image. It also uses https://github.com/hollance/CoreMLHelpers .

    func handlePrediction(request: VNRequest, error: Error?) {
        // Vision wraps the model output as feature-value observations.
        guard let observations = request.results as? [VNCoreMLFeatureValueObservation] else {
            fatalError("unexpected result type from VNCoreMLRequest")
        }
        // The segmentation mask comes back as an MLMultiArray.
        let multiArray = observations[0].featureValue.multiArrayValue!

        DispatchQueue.main.async { [weak self] in
            // Convert the mask to a UIImage with CoreMLHelpers and display it.
            self?.predictionView?.image = MultiArray<Double>(multiArray).image(channel: 0, offset: 0, scale: 255)
        }
    }
akirasosa commented 7 years ago

@Zumbalamambo My friend has implemented it. Take a look. https://github.com/vfa-tranhv/MobileAILab-HairColor-iOS