Ma-Dan / EdgeConnect-CoreML

EdgeConnect for iOS implemented using CoreML.

How to generate mask? #3


albingit70 commented 4 years ago

Hello @Ma-Dan, thank you for your work. I am trying to run your application on an iOS device, but it is not generating mask images like your sample result or the EdgeConnect network. I couldn't draw any masks on the image. Could you help me figure out what I am missing? Thank you.

albingit70 commented 4 years ago

Also, I have a few more questions for you. Is your touch screen editing version done? I noticed that you generate the mask image manually in the code. If we use touch control to generate the mask, do I need to produce the same kind of mask image used in the edge-connect repository (a black-and-white mask)? And do we not need the Canny edge detection algorithm in that case? Looking forward to hearing from you. Thank you.

Ma-Dan commented 4 years ago

Touch screen mask editing has not been done yet. If you want to get the mask from user touches, you should convert it to black-and-white format. The Canny edge detection in this code is not used, because its result is not exactly the same as scikit-image's, and an input with all-zero edge data generates a better result than using this Canny code. If you want to add edge detection, I suggest porting it from scikit-image.
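
As a rough sketch of the touch-mask direction (none of this is in the repo yet; makeMaskImage, the stroke width, and collecting points in touchesMoved(_:with:) are all assumptions on my part), you could render the user's strokes into a black image with white strokes and use that as the mask:

import UIKit

// Render accumulated touch points into a binary mask image:
// black background = pixels to keep, white strokes = region to inpaint.
// `points` would be collected in touchesMoved(_:with:) on the drawing view.
func makeMaskImage(from points: [CGPoint], size: CGSize, strokeWidth: CGFloat = 20) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { context in
        // Start from an all-black mask.
        UIColor.black.setFill()
        context.fill(CGRect(origin: .zero, size: size))

        guard let first = points.first else { return }

        // Stroke the user's path in pure white.
        let path = UIBezierPath()
        path.move(to: first)
        for point in points.dropFirst() {
            path.addLine(to: point)
        }
        path.lineWidth = strokeWidth
        path.lineCapStyle = .round
        path.lineJoinStyle = .round
        UIColor.white.setStroke()
        path.stroke()
    }
}

Note that UIGraphicsImageRenderer antialiases the stroke edge by default, so the result should still be thresholded to pure black and white before it is fed to the model (see below in this thread).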

albingit70 commented 4 years ago

Hi Ma-Dan, thank you for your reply. Here are my input image and generated mask image:

Input: (image attached)

Mask: (image attached)

But I couldn't get a smooth result like the Python edge-connect repository produces.

Result: (image attached)

I think I made a mistake when combining the mask in the process function. Here is my process function:

func process(input: UIImage, mask: UIImage, completion: @escaping FilteringCompletion) {
    let startTime = CFAbsoluteTimeGetCurrent()

    // Initialize the EdgeConnect models
    let model_edge = edge()
    let model_inpainting = inpainting()
    let height = 320
    let width = 320

    // Next steps are pretty heavy, better process them on another thread
    DispatchQueue.global().async {
        // 1 - Resize our input image and mask
        guard let inputImage = input.resize(to: CGSize(width: width, height: height)) else {
            completion(nil, EdgeConnectError.resizeError)
            return
        }
        guard let inputMask = mask.resize(to: CGSize(width: width, height: height)) else {
            completion(nil, EdgeConnectError.resizeError)
            return
        }

        // 2 - Edge model
        guard let cvBufferInput = inputImage.pixelBuffer() else {
            completion(nil, EdgeConnectError.pixelBufferError)
            return
        }
        guard let cvMaskBufferInput = inputMask.pixelBuffer() else {
            completion(nil, EdgeConnectError.pixelBufferError)
            return
        }

        // NOTE: shapes are given as [channels, width, height]; Core ML image-like
        // inputs are conventionally (C, H, W), but width == height == 320 here.
        guard let mlGray = try? MLMultiArray(shape: [1, NSNumber(value: width), NSNumber(value: height)], dataType: .float32) else {
            completion(nil, EdgeConnectError.allocError)
            return
        }
        guard let mlMaskGray = try? MLMultiArray(shape: [1, NSNumber(value: width), NSNumber(value: height)], dataType: .float32) else {
            completion(nil, EdgeConnectError.allocError)
            return
        }
        guard let mlEdge = try? MLMultiArray(shape: [1, NSNumber(value: width), NSNumber(value: height)], dataType: .float32) else {
            completion(nil, EdgeConnectError.allocError)
            return
        }
        guard let mlMaskEdge = try? MLMultiArray(shape: [1, NSNumber(value: width), NSNumber(value: height)], dataType: .float32) else {
            completion(nil, EdgeConnectError.allocError)
            return
        }

        self.getGrayImage(pixelBuffer: cvBufferInput, data: mlGray, height: height, width: width)
        self.getMask(pixelBuffer: cvMaskBufferInput, data: mlMaskGray, height: height, width: width)
        //let image = mlGray.image(min: 0, max: 1, axes: (0, 1, 2))

        // (edgeInputImage and edgeImage are currently unused below)
        let edgeInputImage = self.writeEdgeInputArray(input: cvBufferInput, height: height, width: width)
        var edgeImage = [UInt8](repeating: 0, count: height * width)

        guard let mlInputEdge = try? MLMultiArray(shape: [3, NSNumber(value: width), NSNumber(value: height)], dataType: .float32) else {
            completion(nil, EdgeConnectError.allocError)
            return
        }
        self.prepareEdgeInput(gray: mlGray, edge: mlEdge, mask: mlMaskGray, input: mlInputEdge, height: height, width: width)

        guard let inputEdge = try? edgeInput(input_1: mlInputEdge) else {
            completion(nil, EdgeConnectError.allocError)
            return
        }
        guard let edgeOutput = try? model_edge.prediction(input: inputEdge) else {
            completion(nil, EdgeConnectError.predictionError)
            return
        }
        //let image = edgeOutput._153.image(min: 0, max: 1, axes: (0, 1, 2))

        // 3 - Inpainting model
        guard let mlInputInpainting = try? MLMultiArray(shape: [4, NSNumber(value: width), NSNumber(value: height)], dataType: .float32) else {
            completion(nil, EdgeConnectError.allocError)
            return
        }
        self.prepareInpaintingInput(image: cvBufferInput, mask: mlMaskGray, edge: edgeOutput._153, input: mlInputInpainting, height: height, width: width)

        guard let inputInpainting = try? inpaintingInput(input_1: mlInputInpainting) else {
            completion(nil, EdgeConnectError.allocError)
            return
        }
        guard let inpaintingOutput = try? model_inpainting.prediction(input: inputInpainting) else {
            completion(nil, EdgeConnectError.predictionError)
            return
        }
        //let image = inpaintingOutput._173.image(min: 0, max: 1, axes: (0, 1, 2))

        // 4 - Merge the inpainting output back into the original image
        guard let mlOutput = try? MLMultiArray(shape: [3, NSNumber(value: width), NSNumber(value: height)], dataType: .float32) else {
            completion(nil, EdgeConnectError.allocError)
            return
        }
        self.mergeOutputImage(image: cvBufferInput, inpainting: inpaintingOutput._173, mask: mlMaskGray, output: mlOutput, height: height, width: width)
        let image = mlOutput.image(min: 0, max: 1, axes: (0, 1, 2))

        // 5 - Hand result to main thread
        DispatchQueue.main.async {
            let timeElapsed = CFAbsoluteTimeGetCurrent() - startTime
            print("Time elapsed for EdgeConnect process: \(timeElapsed) s.")
            completion(image, nil)
        }
    }
}

As a separate issue, the result keeps showing white points around the mask edge. Please let me know what I need to fix in the code. Looking forward to hearing from you. Thank you.
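
Edit: my guess is the white points are related to how the mask is blended in mergeOutputImage. If the merge is a per-pixel blend like the sketch below (just my assumption, since the helper body is not shown here), antialiased grey values at the mask edge would average the original and inpainted pixels along the boundary:

// Hypothetical per-pixel blend for the merge step (assumption, not the
// actual helper). With a strictly binary mask (0 or 1) every pixel comes
// entirely from one source; an antialiased value like mask = 0.5 averages
// the two images along the boundary, which can read as bright fringing.
func composite(original: Float, inpainted: Float, mask: Float) -> Float {
    return original * (1 - mask) + inpainted * mask
}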

albingit70 commented 4 years ago

Hi @Ma-Dan, any help with this issue? Thank you.

Ma-Dan commented 4 years ago

Please try this input (image attached). The input image should be 320x320, and the mask region should be all white pixels with no antialiased edge.
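
If your drawing or resizing code produces antialiased edges, you can hard-threshold the mask to pure black and white first. A minimal sketch, assuming the mask can be redrawn into an 8-bit grayscale context (the function name binarized and the threshold 127 are just my choices):

import UIKit

// Hard-threshold a grayscale mask: every pixel above `threshold` becomes
// pure white (255), everything else pure black (0), removing antialiased
// edge values before the mask is fed to the model.
func binarized(mask: UIImage, threshold: UInt8 = 127) -> UIImage? {
    guard let cgImage = mask.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height
    var pixels = [UInt8](repeating: 0, count: width * height)

    return pixels.withUnsafeMutableBytes { buffer -> UIImage? in
        guard let context = CGContext(
            data: buffer.baseAddress,
            width: width,
            height: height,
            bitsPerComponent: 8,
            bytesPerRow: width,
            space: CGColorSpaceCreateDeviceGray(),
            bitmapInfo: CGImageAlphaInfo.none.rawValue
        ) else { return nil }

        // Render the mask into the grayscale buffer, then threshold each byte.
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        for i in 0..<buffer.count {
            buffer[i] = buffer[i] > threshold ? 255 : 0
        }

        guard let output = context.makeImage() else { return nil }
        return UIImage(cgImage: output)
    }
}

Since resizing with interpolation can reintroduce grey values, it is safest to binarize after the resize to 320x320 (or resize with nearest-neighbor sampling).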

rawmean commented 2 years ago

I used the image that you suggested and got this result (image attached). It seems worse than the demo images. Is this what's expected?

gneil90 commented 2 years ago

Can you answer, please? I am getting the same result.