NeutrinosPlatform / cordova-plugin-ml-text

Official repository for the new plugin. The old source is hosted on Bitbucket:
https://bitbucket.org/bhivedevs/cordova-plugin-ml-text
MIT License

Use it with Ionic Capacitor - How I made it #6

Closed · elnezah closed this 3 years ago

elnezah commented 3 years ago

It took me a while to make it work in Ionic Capacitor, so I am going to leave here how I made it; maybe it helps somebody:

Installation

Follow these steps:

cordova plugin add cordova-plugin-ml-text

This will add two lines to your package.json:
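Schematically, the two entries end up in dependencies and in the cordova.plugins section. A sketch of mine, not copied from a real project; the exact version specifier depends on where you installed the plugin from:

"dependencies": {
    "cordova-plugin-ml-text": "x.y.z"
},
"cordova": {
    "plugins": {
        "cordova-plugin-ml-text": {}
    }
}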

Make sure you did not install the plugin from Bitbucket, as that one needs further Firebase configuration. You want your plugin to come from GitHub!

npx jetifier

Otherwise you will get build errors from the old Android support (compatibility) libraries. This is something you have to do very often with Capacitor.
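If you keep forgetting this step, you can also hook it into npm as a postinstall script. This is just my own habit, not something the plugin requires, and it assumes jetifier is installed as a dev dependency:

"scripts": {
    "postinstall": "jetifier"
}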

ionic build

This builds the web assets of your project.

npx cap sync

This copies all new dependencies into the Android and iOS folders. I use this rather than npx cap copy, as the latter has given me headaches at times.

Usage

Now you need to access the plugin, which is not as easy as with other Ionic plugins, because the plugin itself works as usual but there is no Ionic wrapper for it. First, declare the variable mltext at the top of the .ts file where you want to use it, above @Component. Like this:

declare var mltext: any;

@Component({
...

This variable is undefined and will not get content until after the deviceready event, so you have to listen for this event and only then use it. I packed everything inside a function for easy access, and also converted the callbacks to promises, so I can work with the OCR the same way I work with other things in Ionic. This is my function:

// You need to write the interface OCRResults (see below); otherwise use any instead
private performOCR(imgData: any): Promise<OCRResults> {
    return new Promise<OCRResults>((resolve, reject) => {
        // Cordova invokes deviceready listeners that are registered
        // after the event has already fired, so this is safe to call
        // even if the device has been ready for a while
        document.addEventListener('deviceready', () => {
            const ocrOptions = {imgType: 0, imgSrc: imgData};
            mltext.getText(onSuccess, onFail, ocrOptions);

            function onSuccess(recognizedText) {
                console.log('OCR success:', {recognizedText});
                resolve(recognizedText as OCRResults);
            }

            function onFail(message) {
                reject(message);
            }
        }, false);
    });
}
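For completeness, calling it then looks roughly like this. This is just a sketch of mine; where imgData comes from, for example the camera plugin, is up to you:

private async scanImage(imgData: any): Promise<void> {
    try {
        const result = await this.performOCR(imgData);
        // The recognized text, line by line
        console.log(result.lines.linetext);
    } catch (e) {
        console.error('OCR failed:', e);
    }
}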

Create interfaces to help your IDE help you

Finally, as you can see in my function, I don't work with any. Instead I created some interfaces that I place at the top of my file, to get quick typing help from the IDE and to help myself remember how the OCR results are built. Here are my interfaces:

interface OCRResults {
    blocks: {
        blockframe: Frame[];
        blockpoints: Points[];
        blocktext: string[];
    };
    lines: {
        lineframe: Frame[];
        linepoints: Points[];
        linetext: string[];
    };
    words: {
        wordframe: Frame[];
        wordpoints: Points[];
        wordtext: string[];
    };
}

interface Points {
    x1: number | string;
    x2: number | string;
    x3: number | string;
    x4: number | string;
    y1: number | string;
    y2: number | string;
    y3: number | string;
    y4: number | string;
}

interface Frame {
    x: number | string;
    y: number | string;
    height: number | string;
    width: number | string;
}
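Since the result arrays appear to be parallel (index i in wordtext belongs with index i in wordframe and wordpoints, as far as I can tell), a small helper can zip them into one object per word. A convenience sketch of mine, not part of the plugin:

interface RecognizedWord {
    text: string;
    frame: Frame;
    points: Points;
}

function wordsOf(res: OCRResults): RecognizedWord[] {
    // Pair up the parallel arrays by index
    return res.words.wordtext.map((text, i) => ({
        text,
        frame: res.words.wordframe[i],
        points: res.words.wordpoints[i],
    }));
}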

It worked for me on Android and iOS

ChrisTomAlx commented 3 years ago

Thank you @elnezah for leaving this for others. I have pinned the issue to the repository.

Cheers and have a nice day :)

Chris
Neutrinos

AneudysOrtiz commented 3 years ago

Thanks, it worked, especially on iOS, where I was getting the following error when uploading to the App Store:

ITMS-90809: Deprecated API Usage - New apps that use UIWebView are no longer accepted.