kerct / mlkit-ocv

android app with face detection using firebase ML kit, face recognition using javacv

Applying Imgproc to detected face #1

Closed Cortlandd closed 5 years ago

Cortlandd commented 5 years ago

Hello, I saw your repo and found you've made the progress I'm trying to get to. I've implemented the base ML Kit contour detector: it draws to the canvas, draws the contours, etc., all at the defaults. I want to apply Imgproc.cvtColor to the detected face inside the rectangle, but I'm not sure how to approach it. Any tips?

kerct commented 5 years ago

ML Kit provides a list of the detected faces (List<FirebaseVisionFace>) as well as the original camera image in the form of a Bitmap. You can crop out each detected face from the original bitmap using the coordinates of its bounding box to create a new bitmap, then convert that bitmap to a Mat so that you can apply Imgproc.cvtColor.
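
Roughly something like this (an untested sketch, assuming you have the originalCameraImage Bitmap and one detected face from the success callback):

val box = face.boundingBox   // android.graphics.Rect of the detected face
// crop the face out of the camera frame (in practice you may want to clamp the box to the image bounds first)
val faceBitmap = Bitmap.createBitmap(originalCameraImage, box.left, box.top, box.width(), box.height())

// convert the cropped bitmap to a Mat so OpenCV can work on it
val faceMat = Mat()
Utils.bitmapToMat(faceBitmap, faceMat)
Imgproc.cvtColor(faceMat, faceMat, Imgproc.COLOR_RGBA2GRAY)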

I have actually implemented this already in my code, feel free to check it out! 😃

Cortlandd commented 5 years ago

Thanks for the response. And yes, I saw your implementation, but why does the camera quality look so different? It looks very "pixelated", so to speak. I tested another of your examples and it does the same. But when I ran the sample implementation it showed the camera's base quality.

Also, PersonRecogniser is learning the face, but it isn't drawing anything or converting the colour in the live preview. I'm trying to modify the face in the live camera. Or do we HAVE to use OpenCV's camera to do any processing with OpenCV?

kerct commented 5 years ago

What was the sample implementation you did? I'm following ML Kit's implementation for the camera, and when I tried their sample code, their camera was also pixelated, probably to speed things up since the face detection is done in real time.

In my implementation, I'm trying to do face recognition in PersonRecogniser, so nothing is drawn in the live preview, except that the name of the predicted person is printed out. If you want to use ML Kit for detection and OpenCV for processing, there is no need to use OpenCV's camera, since you can just pass the detected face from ML Kit's camera to OpenCV.

Cortlandd commented 5 years ago

How do I pass the detected face? I tried, with no luck; it just returns null. And where do I apply the Imgproc functionality to the detected face? I'm assuming in the FaceGraphic overlay, but I tried that with no luck too.

Cortlandd commented 5 years ago

And I ran the quickstart-android sample for ML Kit:

https://github.com/firebase/quickstart-android/tree/master/mlkit

kerct commented 5 years ago

How do I pass the detected face? I tried, with no luck. Where do I apply the Imgproc functionality? I'm assuming in the FaceGraphic overlay, but I tried that with no luck.

Did you manage to get the cropped bitmap of the detected face from ML Kit? What's the error? Did you import OpenCV? What are you trying to achieve? Do you need to use OpenCV for it?

And I ran the quickstart-android sample for ML Kit:

https://github.com/firebase/quickstart-android/tree/master/mlkit

That's weird, I tried this and got the pixelated camera...

Cortlandd commented 5 years ago

Did you manage to get the cropped bitmap of the detected face from ML Kit?

override fun onSuccess(originalCameraImage: Bitmap?, faces: List<FirebaseVisionFace>, frameMetadata: FrameMetadata, graphicOverlay: GraphicOverlay) {
        graphicOverlay.clear()

        originalCameraImage?.let {
            val imageGraphic = CameraImageGraphic(graphicOverlay, it)
            graphicOverlay.add(imageGraphic)

        }

        faces.forEach {
            val faceGraphic = FaceContourGraphic(graphicOverlay, it)
            graphicOverlay.add(faceGraphic)

        }
        graphicOverlay.postInvalidate()
    }

I tried using Imgproc on the original image with no luck. Or is that not the live face?

kerct commented 5 years ago

Yes, this is the function where the detected live faces are returned. But where is the part where Imgproc is used?

Cortlandd commented 5 years ago

Apologies, I removed it. But this is what I'm trying to implement; it's from the OpenCV sample.

public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
        final int viewMode = mViewMode;
        switch (viewMode) {
            case VIEW_MODE_CANNY:
                // input frame has gray scale format
                mRgba = inputFrame.rgba();
                Imgproc.Canny(inputFrame.gray(), mIntermediateMat, 80, 100);
                Imgproc.cvtColor(mIntermediateMat, mRgba, Imgproc.COLOR_GRAY2RGBA, 4);
                break;
        }
        return mRgba;
}

My issue is that I'm not sure where this code should live in my ML Kit + OpenCV code.

kerct commented 5 years ago

You can convert the Bitmap (originalCameraImage) into a Mat, then use Imgproc on it.
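
For example (a minimal sketch, with a non-null originalCameraImage):

val rgba = Mat()
Utils.bitmapToMat(originalCameraImage, rgba)           // Bitmap -> Mat
Imgproc.cvtColor(rgba, rgba, Imgproc.COLOR_RGBA2GRAY)  // now any Imgproc call works on it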

kerct commented 5 years ago

I had one class for the ML Kit side (FaceDetectionProcessor), one for the OpenCV side (PersonRecogniser), and one class which links the two (Recognise). It's up to you where you want to put the code, but I feel this way is much clearer.

Cortlandd commented 5 years ago

You can convert the Bitmap (originalCameraImage) into a Mat, then use Imgproc on it.

See, I tried that, but no luck. OpenCV has Utils for converting a bitmap to a Mat. Could it be because of the following?

Sorry for the Kotlin and Java mix.

CameraImageGraphic.java

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Rect;

/** Draw camera image to background. */
public class CameraImageGraphic extends GraphicOverlay.Graphic {

    private final Bitmap bitmap;

    public CameraImageGraphic(GraphicOverlay overlay, Bitmap bitmap) {
        super(overlay);
        this.bitmap = bitmap;
    }

    @Override
    public void draw(Canvas canvas) {
        canvas.drawBitmap(bitmap, null, new Rect(0, 0, canvas.getWidth(), canvas.getHeight()), null);
    }
}

FaceContourGraphic.kt

import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import com.google.firebase.ml.vision.face.FirebaseVisionFace
import com.google.firebase.ml.vision.face.FirebaseVisionFaceContour
import com.google.firebase.ml.vision.face.FirebaseVisionFaceLandmark

/** Graphic instance for rendering face contours graphic overlay view.  */
class FaceContourGraphic(overlay: GraphicOverlay, private val firebaseVisionFace: FirebaseVisionFace?): GraphicOverlay.Graphic(overlay) {

    private val facePositionPaint: Paint
    private val redfacePositionPaint: Paint
    private val idPaint: Paint
    private val boxPaint: Paint

    init {
        val selectedColor = Color.WHITE

        facePositionPaint = Paint()
        facePositionPaint.color = selectedColor

        redfacePositionPaint = Paint()
        redfacePositionPaint.color = Color.RED

        idPaint = Paint()
        idPaint.color = selectedColor
        idPaint.textSize = ID_TEXT_SIZE

        boxPaint = Paint()
        boxPaint.color = selectedColor
        boxPaint.style = Paint.Style.STROKE
        boxPaint.strokeWidth = BOX_STROKE_WIDTH
    }

    /** Draws the face annotations for position on the supplied canvas.  */
    override fun draw(canvas: Canvas) {
        val face = firebaseVisionFace ?: return

        // Draws a circle at the position of the detected face, with the face's track id below.
        val x = translateX(face.boundingBox.centerX().toFloat())
        val y = translateY(face.boundingBox.centerY().toFloat())
        canvas.drawCircle(x, y, FACE_POSITION_RADIUS, facePositionPaint)
        canvas.drawText("id: ${face.trackingId}", x + ID_X_OFFSET, y + ID_Y_OFFSET, idPaint)

        // Draws a bounding box around the face.
        val xOffset = scaleX(face.boundingBox.width() / 2.0f)
        val yOffset = scaleY(face.boundingBox.height() / 2.0f)
        val left = x - xOffset
        val top = y - yOffset
        val right = x + xOffset
        val bottom = y + yOffset
        canvas.drawRect(left, top, right, bottom, boxPaint)

        val contour = face.getContour(FirebaseVisionFaceContour.ALL_POINTS)
        for (point in contour.points) {
            val px = translateX(point.x)
            val py = translateY(point.y)
            canvas.drawCircle(px, py, FACE_POSITION_RADIUS, redfacePositionPaint)
        }

        if (face.smilingProbability >= 0) {
            canvas.drawText(
                    "happiness: ${String.format("%.2f", face.smilingProbability)}",
                    x + ID_X_OFFSET * 3,
                    y - ID_Y_OFFSET,
                    idPaint)
        }

        if (face.rightEyeOpenProbability >= 0) {
            canvas.drawText(
                    "right eye: ${String.format("%.2f", face.rightEyeOpenProbability)}",
                    x - ID_X_OFFSET,
                    y,
                    idPaint)
        }
        if (face.leftEyeOpenProbability >= 0) {
            canvas.drawText(
                    "left eye: ${String.format("%.2f", face.leftEyeOpenProbability)}",
                    x + ID_X_OFFSET * 6,
                    y,
                    idPaint)
        }
        val leftEye = face.getLandmark(FirebaseVisionFaceLandmark.LEFT_EYE)
        leftEye?.position?.let {
            canvas.drawCircle(
                    translateX(it.x),
                    translateY(it.y),
                    FACE_POSITION_RADIUS,
                    facePositionPaint)
        }
        val rightEye = face.getLandmark(FirebaseVisionFaceLandmark.RIGHT_EYE)
        rightEye?.position?.let {
            canvas.drawCircle(
                    translateX(it.x),
                    translateY(it.y),
                    FACE_POSITION_RADIUS,
                    facePositionPaint)
        }
        val leftCheek = face.getLandmark(FirebaseVisionFaceLandmark.LEFT_CHEEK)
        leftCheek?.position?.let {
            canvas.drawCircle(
                    translateX(it.x),
                    translateY(it.y),
                    FACE_POSITION_RADIUS,
                    facePositionPaint)
        }

        val rightCheek = face.getLandmark(FirebaseVisionFaceLandmark.RIGHT_CHEEK)
        rightCheek?.position?.let {
            canvas.drawCircle(
                    translateX(it.x),
                    translateY(it.y),
                    FACE_POSITION_RADIUS,
                    facePositionPaint)
        }
    }

    companion object {

        private const val FACE_POSITION_RADIUS = 4.0f
        private const val ID_TEXT_SIZE = 30.0f
        private const val ID_Y_OFFSET = 80.0f
        private const val ID_X_OFFSET = -70.0f
        private const val BOX_STROKE_WIDTH = 5.0f
    }
}

kerct commented 5 years ago

OpenCV has Utils for converting a bitmap to a Mat

What is the error when using this?

Cortlandd commented 5 years ago

I tried to simply convert the detected face to grayscale. No luck at all, it's just the same. No error.

override fun onSuccess(originalCameraImage: Bitmap?, results: List<FirebaseVisionFace>, frameMetadata: FrameMetadata, graphicOverlay: GraphicOverlay) {
        graphicOverlay.clear()

        originalCameraImage?.let {

            var rgba = Mat()
            Utils.bitmapToMat(originalCameraImage, rgba)
            Imgproc.cvtColor(rgba, rgba, Imgproc.COLOR_RGBA2GRAY)

            val imageGraphic = CameraImageGraphic(graphicOverlay, it)
            graphicOverlay.add(imageGraphic)

        }

        results.forEach {
            val faceGraphic = FaceContourGraphic(graphicOverlay, it)
            graphicOverlay.add(faceGraphic)

        }
        graphicOverlay.postInvalidate()
    }

kerct commented 5 years ago

Yes, you have converted the matrix rgba to greyscale, but you are not doing anything with it; the image graphic you are drawing still uses the original coloured bitmap. Try converting rgba back to a bitmap and passing it into the CameraImageGraphic constructor.
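
Something like this (a sketch building on your snippet above):

// convert the processed Mat back to a Bitmap...
val processed = Bitmap.createBitmap(rgba.cols(), rgba.rows(), Bitmap.Config.ARGB_8888)
Utils.matToBitmap(rgba, processed)

// ...and draw that instead of the original coloured bitmap
val imageGraphic = CameraImageGraphic(graphicOverlay, processed)
graphicOverlay.add(imageGraphic)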

Cortlandd commented 5 years ago

Try converting rgba back to a bitmap and passing it into the CameraImageGraphic constructor.

That worked! Instead of converting just the face to greyscale it converted the whole image, but I'm assuming that's because originalCameraImage is the whole live preview?

kerct commented 5 years ago

Yup, that's because the Bitmap that you worked on is the originalCameraImage. If you want to work on the face only, you can crop out that portion from the original Bitmap.

Cortlandd commented 5 years ago

If you want to work on the face only, you can crop out that portion from the original Bitmap.

How, if all I have is a FirebaseVisionFace? And in doing so, will it still be in the live preview?

kerct commented 5 years ago

You can call getBoundingBox() on the FirebaseVisionFace. From there, you will have the coordinates of the Rect of the detected face. You can do it anywhere you want, but doing it in the live preview should work.
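
For example, inside your results.forEach (a sketch):

val box = it.boundingBox   // android.graphics.Rect of the detected face
// box.left, box.top, box.width() and box.height() are in the coordinates of originalCameraImage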

Cortlandd commented 5 years ago

Lol, here every time a face is detected the screen goes black; when there's no face, it's back to normal. Where am I going wrong here, lol?

override fun onSuccess(originalCameraImage: Bitmap?, results: List<FirebaseVisionFace>, frameMetadata: FrameMetadata, graphicOverlay: GraphicOverlay) {
        graphicOverlay.clear()

        originalCameraImage?.let {

//            var rgba = Mat()
//            Utils.bitmapToMat(originalCameraImage, rgba)
//            Imgproc.cvtColor(rgba, rgba, Imgproc.COLOR_RGBA2GRAY)
//            Utils.matToBitmap(rgba, originalCameraImage)

            val imageGraphic = CameraImageGraphic(graphicOverlay, it)
            graphicOverlay.add(imageGraphic)

        }

        results.forEach {

            var rgba = Mat()

            var newImage = Bitmap.createBitmap(it.boundingBox.width(), it.boundingBox.height(), Bitmap.Config.RGB_565)
            Utils.bitmapToMat(newImage, rgba)
            Imgproc.cvtColor(rgba, rgba, Imgproc.COLOR_RGBA2GRAY)

            Utils.matToBitmap(rgba, newImage)

            val imageGraphic = CameraImageGraphic(graphicOverlay, newImage)
            val faceGraphic = FaceContourGraphic(graphicOverlay, it)
            graphicOverlay.add(faceGraphic)
            graphicOverlay.add(imageGraphic)

        }

        graphicOverlay.postInvalidate()
    }

kerct commented 5 years ago

Is your newImage correctly created? Seems like it might be an empty bitmap.

Cortlandd commented 5 years ago

var newImage = Bitmap.createBitmap(it.boundingBox.width(), it.boundingBox.height(), Bitmap.Config.RGB_565)

            if (newImage == null) {
                println("Empty bitmap")
            } else {
                println("Not empty bitmap")
            }

It's printing "Not empty bitmap". So I'm assuming that when I create it above, I'm just creating a black image with a width and height?

This link basically tells me to do the same, but no luck: https://stackoverflow.com/questions/51335588/mlkit-firebase-android-how-to-convert-firebasevisionface-to-image-object-like

Cortlandd commented 5 years ago

I may have bitten off more than I can chew. Thanks for the help, but I have to find a different approach.

kerct commented 5 years ago

So I'm assuming that when I create it above, I'm just creating a black image with a width and height?

Yup. You can use a different createBitmap overload, the one which takes a source bitmap: https://developer.android.com/reference/android/graphics/Bitmap#createBitmap(android.graphics.Bitmap,%2520int,%2520int,%2520int,%2520int)
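
For example (an untested sketch, inside your results.forEach, assuming originalCameraImage is non-null as in your snippet), cropping the detected face out of the camera frame and clamping the bounding box so it stays inside the image:

val frame = originalCameraImage!!
val box = it.boundingBox
// clamp the box to the frame, since the detected box can extend past its edges
val left = box.left.coerceIn(0, frame.width - 1)
val top = box.top.coerceIn(0, frame.height - 1)
val width = box.width().coerceAtMost(frame.width - left)
val height = box.height().coerceAtMost(frame.height - top)

// copies the face region out of the source bitmap instead of creating a blank one
val faceBitmap = Bitmap.createBitmap(frame, left, top, width, height)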

Cortlandd commented 5 years ago

Code

override fun onSuccess( originalCameraImage: Bitmap?, results: List<FirebaseVisionFace>, frameMetadata: FrameMetadata, graphicOverlay: GraphicOverlay) {
        graphicOverlay.clear()

        originalCameraImage?.let {

//            var rgba = Mat()
//            Utils.bitmapToMat(originalCameraImage, rgba)
//            Imgproc.cvtColor(rgba, rgba, Imgproc.COLOR_RGBA2GRAY)
//            Utils.matToBitmap(rgba, originalCameraImage)

            val imageGraphic = CameraImageGraphic(graphicOverlay, it)
            graphicOverlay.add(imageGraphic)

        }

        results.forEach {

            val faceGraphic = FaceContourGraphic(graphicOverlay, it)
            graphicOverlay.add(faceGraphic)

            var rgba = Mat()
            var rgbaInnerWindow = Mat()
            var mIntermediateMat = Mat()
            //var myMat = Mat(it.boundingBox.height(), it.boundingBox.width(), )

            var newImage = Bitmap.createBitmap(originalCameraImage)

            Utils.bitmapToMat(newImage, rgba)

            var sizeRgba = rgba.size()

            var rows = sizeRgba.height.toInt()
            var cols = sizeRgba.width.toInt()

            var left = cols / 8
            var top = rows/ 8

            var width = cols * 3 / 4
            var height = rows * 3 / 4

            rgbaInnerWindow = rgba.submat(top,top + height, left, width)
            Imgproc.Canny(rgbaInnerWindow, mIntermediateMat, 80.0, 90.0)
            Imgproc.cvtColor(mIntermediateMat, rgbaInnerWindow, Imgproc.COLOR_GRAY2BGRA, 4)

            // THIS IS WHERE THE ERROR IS
            Utils.matToBitmap(rgbaInnerWindow, newImage)

            val imageGraphic = CameraImageGraphic(graphicOverlay, newImage)

            graphicOverlay.add(imageGraphic)

        }

        graphicOverlay.postInvalidate()
    }

Error

2019-05-06 20:25:08.912 7054-7054/? E/AndroidRuntime: FATAL EXCEPTION: main
    Process: com.zqc.ml.snapchat, PID: 7054
    CvException [org.opencv.core.CvException: OpenCV(3.4.3) /build/3_4_pack-android/opencv/modules/java/generator/src/cpp/utils.cpp:101: error: (-215:Assertion failed) src.dims == 2 && info.height == (uint32_t)src.rows && info.width == (uint32_t)src.cols in function 'void Java_org_opencv_android_Utils_nMatToBitmap2(JNIEnv*, jclass, jlong, jobject, jboolean)'
    ]
        at org.opencv.android.Utils.nMatToBitmap2(Native Method)
        at org.opencv.android.Utils.matToBitmap(Utils.java:123)
        at org.opencv.android.Utils.matToBitmap(Utils.java:132)
        at com.zqc.ml.snapchat.FaceContourDetectorProcessor.onSuccess(FaceContourDetectorProcessor.kt:99)
        at com.zqc.ml.snapchat.FaceContourDetectorProcessor.onSuccess(FaceContourDetectorProcessor.kt:29)
        at com.zqc.ml.snapchat.VisionProcessorBase$detectInVisionImage$1.onSuccess(VisionProcessorBase.kt:90)
        at com.google.android.gms.tasks.zzn.run(Unknown Source:4)
        at android.os.Handler.handleCallback(Handler.java:873)
        at android.os.Handler.dispatchMessage(Handler.java:99)
        at android.os.Looper.loop(Looper.java:193)
        at android.app.ActivityThread.main(ActivityThread.java:6863)
        at java.lang.reflect.Method.invoke(Native Method)
        at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:537)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:858)

Hmm, I'm not really doing much with the face. But based on what I have, the entire screen should've been converted to Canny.

Cortlandd commented 5 years ago

It's glitching a bit, I think it's because of the threshold, and it's applying to the whole screen, but it's working somewhat.

override fun onSuccess( originalCameraImage: Bitmap?, results: List<FirebaseVisionFace>, frameMetadata: FrameMetadata, graphicOverlay: GraphicOverlay) {
        graphicOverlay.clear()

        originalCameraImage?.let {

//            var rgba = Mat()
//            Utils.bitmapToMat(originalCameraImage, rgba)
//            Imgproc.cvtColor(rgba, rgba, Imgproc.COLOR_RGBA2GRAY)
//            Utils.matToBitmap(rgba, originalCameraImage)

            val imageGraphic = CameraImageGraphic(graphicOverlay, it)
            graphicOverlay.add(imageGraphic)

        }

        results.forEach {

            val faceGraphic = FaceContourGraphic(graphicOverlay, it)
            graphicOverlay.add(faceGraphic)

            var rgba = Mat(originalCameraImage!!.height, originalCameraImage.width, CvType.CV_8UC4)
            var mIntermediateMat = Mat(originalCameraImage.height, originalCameraImage.width, CvType.CV_8UC4)

            var gray = Mat()
            Imgproc.cvtColor(rgba, gray, Imgproc.COLOR_RGBA2GRAY)

            var newImage = Bitmap.createBitmap(originalCameraImage)

            Utils.bitmapToMat(newImage, rgba)

            Imgproc.Canny(gray, mIntermediateMat, 60.0, 80.0)
            Imgproc.cvtColor(mIntermediateMat, rgba, Imgproc.COLOR_GRAY2RGBA, 4)

            Utils.matToBitmap(rgba, newImage)

            val imageGraphic = CameraImageGraphic(graphicOverlay, newImage)

            graphicOverlay.add(imageGraphic)

        }

        graphicOverlay.postInvalidate()
    }

What's weird is that if I change rgba and mIntermediateMat to use the face.boundingBox width and height, it gives me an error. It's weird.

2019-05-06 20:55:57.824 12813-12813/com.zqc.ml.snapchat E/AndroidRuntime: FATAL EXCEPTION: main
    Process: com.zqc.ml.snapchat, PID: 12813
    CvException [org.opencv.core.CvException: OpenCV(3.4.3) /build/3_4_pack-android/opencv/modules/java/generator/src/cpp/utils.cpp:101: error: (-215:Assertion failed) src.dims == 2 && info.height == (uint32_t)src.rows && info.width == (uint32_t)src.cols in function 'void Java_org_opencv_android_Utils_nMatToBitmap2(JNIEnv*, jclass, jlong, jobject, jboolean)'
    ]
        at org.opencv.android.Utils.nMatToBitmap2(Native Method)
        at org.opencv.android.Utils.matToBitmap(Utils.java:123)
        at org.opencv.android.Utils.matToBitmap(Utils.java:132)
        at com.zqc.ml.snapchat.FaceContourDetectorProcessor.onSuccess(FaceContourDetectorProcessor.kt:87)
        at com.zqc.ml.snapchat.FaceContourDetectorProcessor.onSuccess(FaceContourDetectorProcessor.kt:29)
        at com.zqc.ml.snapchat.VisionProcessorBase$detectInVisionImage$1.onSuccess(VisionProcessorBase.kt:90)
        at com.google.android.gms.tasks.zzn.run(Unknown Source:4)
        at android.os.Handler.handleCallback(Handler.java:873)
        at android.os.Handler.dispatchMessage(Handler.java:99)
        at android.os.Looper.loop(Looper.java:193)
        at android.app.ActivityThread.main(ActivityThread.java:6863)
        at java.lang.reflect.Method.invoke(Native Method)
        at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:537)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:858)

UPDATE: Working. The position of the rgba window isn't EXACTLY on the face; I need to make the bitmap actually track the face. But it's on the screen, and whatever is inside of it is now working.

override fun onSuccess( originalCameraImage: Bitmap?, results: List<FirebaseVisionFace>, frameMetadata: FrameMetadata, graphicOverlay: GraphicOverlay) {
        graphicOverlay.clear()

        originalCameraImage?.let {

//            var rgba = Mat()
//            Utils.bitmapToMat(originalCameraImage, rgba)
//            Imgproc.cvtColor(rgba, rgba, Imgproc.COLOR_RGBA2GRAY)
//            Utils.matToBitmap(rgba, originalCameraImage)

            val imageGraphic = CameraImageGraphic(graphicOverlay, it)
            graphicOverlay.add(imageGraphic)

        }

        results.forEach {

            var rgba = Mat(originalCameraImage!!.height, originalCameraImage.width, CvType.CV_8UC4)
            var sizeRgb = rgba.size()

            var rgbaInterWindow = Mat()
            var mIntermediateMat = Mat()

            var rows = it.boundingBox.height()
            var cols = it.boundingBox.width()

            var left = cols / 8
            var top = rows / 8

            var width  = cols * 3 / 4
            var height = cols * 3 / 4

            var newImage = Bitmap.createBitmap(originalCameraImage)

            Utils.bitmapToMat(newImage, rgba)

            rgbaInterWindow = rgba.submat(top, top + height, left, left + width)
            Imgproc.Canny(rgbaInterWindow, mIntermediateMat, 100.0, 100.0)
            Imgproc.cvtColor(mIntermediateMat, rgbaInterWindow, Imgproc.COLOR_GRAY2BGRA, 4)

            Utils.matToBitmap(rgba, newImage)

            val imageGraphic = CameraImageGraphic(graphicOverlay, newImage)
            graphicOverlay.add(imageGraphic)

            val faceGraphic = FaceContourGraphic(graphicOverlay, it)
            graphicOverlay.add(faceGraphic)

        }

        graphicOverlay.postInvalidate()
    }

Cortlandd commented 5 years ago

Somewhat working. I got the black bitmap image to track the face, but the Canny part is acting like a TV that isn't working. It doesn't have the expected Canny results, but it's definitely close.

/*** OPENCV  ****/

        // Rectangle the size of the face
        var rect = Rect(left.toInt(), top.toInt(), right.toInt(), bottom.toInt())

        var b = Bitmap.createBitmap(rect.width(), rect.height(), Bitmap.Config.ARGB_8888)

        var rgba = Mat(rect.height(), rect.width(), CvType.CV_8UC4)
        var sizeRgba = rgba.size()

        //Utils.bitmapToMat(b, rgba)

        var rgbaInterWindow = Mat()
        var mIntermediateMat = Mat()

        var rows = sizeRgba.height.toInt()
        var cols = sizeRgba.width.toInt()

        val mleft = cols / 8
        val mtop = rows / 8

        val width = cols * 3 / 4
        val height = rows * 3 / 4

        rgbaInterWindow = rgba.submat(mtop, mtop + height, mleft, mleft + width)
        Imgproc.Canny(rgbaInterWindow, mIntermediateMat, 80.0, 90.0)
        Imgproc.cvtColor(mIntermediateMat, rgbaInterWindow, Imgproc.COLOR_GRAY2BGRA, 4)

        Utils.matToBitmap(rgba, b)

        canvas.drawBitmap(b, null, rect, null)