It took me a while to make this work in Ionic Capacitor, so I am going to leave here how I did it; maybe it helps somebody:
Installation
Follow these steps:
cordova plugin add cordova-plugin-ml-text
This will add two lines to package.json:
"cordova-plugin-ml-text": "^2.0.0" in dependencies
"cordova-plugin-ml-text": {"MLKIT_TEXT_RECOGNITION_VERSION": "16.1.0"} in cordova.plugins
Make sure you did not install the plugin from Bitbucket, as that version needs further Firebase configuration. You want your plugin to come from GitHub!
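For reference, the relevant fragments of package.json should look roughly like this afterwards (a sketch; your version numbers may differ):

```json
{
  "dependencies": {
    "cordova-plugin-ml-text": "^2.0.0"
  },
  "cordova": {
    "plugins": {
      "cordova-plugin-ml-text": {
        "MLKIT_TEXT_RECOGNITION_VERSION": "16.1.0"
      }
    }
  }
}
```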
npx jetifier
Otherwise you will get errors with the compatibility libraries. This is something you have to do quite often with Capacitor.
ionic build
This builds your project.
npx cap sync
This copies all new dependencies to the Android and iOS folders. I use this rather than npx cap copy, as the latter has given me headaches sometimes.
Usage
Now you need to access the plugin, which is not as straightforward as with other Ionic plugins: the plugin itself works, but there is no Ionic wrapper for it. First, declare the variable mltext at the top of the .ts file where you want to use it, above @Component. Like this:
declare var mltext;
@Component({
...
This variable is undefined and will not get content until after the deviceready event, so you have to listen for that event before using it. I packed everything inside a function for easy access, and also converted the callbacks to promises, so I can work with the OCR the same way I work with everything else in Ionic. This is my function:
// You need to write the interface OCRResults yourself, otherwise use any instead
private performOCR(imgData: any): Promise<OCRResults> {
  return new Promise<OCRResults>((resolve, reject) => {
    // The plugin is only available once the device is ready
    document.addEventListener("deviceready", () => {
      const ocrOptions = {imgType: 0, imgSrc: imgData};
      mltext.getText(onSuccess, onFail, ocrOptions);
      function onSuccess(recognizedText) {
        console.log(TestSandboxPage.TAG, 'OCR success:', {recognizedText});
        resolve(recognizedText as OCRResults);
      }
      function onFail(message) {
        reject(message);
      }
    }, false);
  });
}
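If you want to see the callback-to-promise conversion in isolation, here is a minimal device-free sketch. The real plugin only exists after deviceready, so mltext is replaced by a stub here (the getText(onSuccess, onFail, options) signature is taken from the function above); everything else is the same wrapping pattern:

```typescript
// Stub standing in for the Cordova plugin. Assumption: the real plugin
// calls onSuccess with an OCRResults-shaped object, as described below.
const mltextStub = {
  getText(
    onSuccess: (text: { words: { wordtext: string[] } }) => void,
    onFail: (message: string) => void,
    options: { imgType: number; imgSrc: string }
  ): void {
    // Pretend the recognizer found two words.
    onSuccess({ words: { wordtext: ['hello', 'world'] } });
  },
};

// Same callback-to-promise pattern as performOCR, minus the deviceready listener.
function performOCRStub(imgData: string): Promise<{ words: { wordtext: string[] } }> {
  return new Promise((resolve, reject) => {
    mltextStub.getText(resolve, reject, { imgType: 0, imgSrc: imgData });
  });
}

performOCRStub('fake-base64-data').then((result) => {
  console.log(result.words.wordtext.join(' ')); // hello world
});
```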
Create interfaces to help your IDE help you
Finally, as you can see in my function, I don't work with anys. Instead I created some interfaces that I place at the top of my file, to get quick typing help from the IDE and to help myself remember how the OCR results are built. Here are my interfaces:
interface OCRResults {
  blocks: {
    blockframe: Frame[];
    blockpoints: Points[];
    blocktext: string[];
  };
  lines: {
    lineframe: Frame[];
    linepoints: Points[];
    linetext: string[];
  };
  words: {
    wordframe: Frame[];
    wordpoints: Points[];
    wordtext: string[];
  };
}
interface Points {
  x1: number | string;
  x2: number | string;
  x3: number | string;
  x4: number | string;
  y1: number | string;
  y2: number | string;
  y3: number | string;
  y4: number | string;
}
interface Frame {
  x: number | string;
  y: number | string;
  height: number | string;
  width: number | string;
}
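With the interfaces in place, small typed helpers become easy to write. A hypothetical example (not part of the plugin; the result shape is assumed from the interfaces above) that joins all recognized words into one string:

```typescript
// Minimal slice of the OCRResults interface defined above.
interface WordResults {
  words: { wordtext: string[] };
}

// Hypothetical helper: joins every recognized word into a single string.
function extractText(results: WordResults): string {
  return results.words.wordtext.join(' ');
}

// Example with a hand-made result object:
const sample: WordResults = { words: { wordtext: ['Total:', '42.50', 'EUR'] } };
console.log(extractText(sample)); // Total: 42.50 EUR
```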
It worked for me on Android and iOS.