This is the plugin demo in action:

_Demo screenshots: one while recognizing Dutch 🇳🇱, one after recognizing American-English 🇺🇸._
From the command prompt go to your app's root folder and execute:

```bash
ns plugin add nativescript-speech-recognition
```

Or, on NativeScript versions older than 7, install the 1.5.0 release:

```bash
tns plugin add nativescript-speech-recognition@1.5.0
```
You'll need to test this on a real device as a Simulator/Emulator doesn't have speech recognition capabilities.
### available

Depending on the OS version, a speech engine may not be available.
```js
// require the plugin
var SpeechRecognition = require("nativescript-speech-recognition").SpeechRecognition;

// instantiate the plugin
var speechRecognition = new SpeechRecognition();

speechRecognition.available().then(
  function(available) {
    console.log(available ? "YES!" : "NO");
  }
);
```
```typescript
// import the plugin
import { SpeechRecognition } from "nativescript-speech-recognition";

class SomeClass {
  private speechRecognition = new SpeechRecognition();

  public checkAvailability(): void {
    this.speechRecognition.available().then(
      (available: boolean) => console.log(available ? "YES!" : "NO"),
      (err: string) => console.log(err)
    );
  }
}
```
### requestPermission
You can let `startListening` handle permissions when needed, but if you want more control over when the permission popups are shown, you can use this function:
```typescript
this.speechRecognition.requestPermission().then((granted: boolean) => {
  console.log("Granted? " + granted);
});
```
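For instance, you could request permission up front and only start listening once it's granted. A minimal sketch of that flow (the `PermissionAwareListener` class and its structure are illustrative, not part of the plugin):

```typescript
import { SpeechRecognition, SpeechRecognitionTranscription } from "nativescript-speech-recognition";

export class PermissionAwareListener {
  private speechRecognition = new SpeechRecognition();

  // ask for permission up front, and only start listening once granted
  public listenIfAllowed(): void {
    this.speechRecognition.requestPermission().then((granted: boolean) => {
      if (!granted) {
        console.log("Speech recognition permission was denied.");
        return;
      }
      this.speechRecognition.startListening({
        onResult: (transcription: SpeechRecognitionTranscription) =>
            console.log(`User said: ${transcription.text}`)
      });
    });
  }
}
```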
### startListening
On iOS this will trigger two prompts:
The first prompt requests to allow Apple to analyze the voice input. The user will see a consent screen which you can extend with your own message by adding a fragment like this to `app/App_Resources/iOS/Info.plist`:
```xml
<key>NSSpeechRecognitionUsageDescription</key>
<string>My custom recognition usage description. Overriding the default empty one in the plugin.</string>
```
The second prompt requests access to the microphone:
```xml
<key>NSMicrophoneUsageDescription</key>
<string>My custom microphone usage description. Overriding the default empty one in the plugin.</string>
```
```typescript
// import the options
import { SpeechRecognitionTranscription } from "nativescript-speech-recognition";

this.speechRecognition.startListening(
  {
    // optional, uses the device locale by default
    locale: "en-US",
    // set to true to get results back continuously
    returnPartialResults: true,
    // this callback will be invoked repeatedly during recognition
    onResult: (transcription: SpeechRecognitionTranscription) => {
      console.log(`User said: ${transcription.text}`);
      console.log(`User finished?: ${transcription.finished}`);
    },
    onError: (error: string | number) => {
      // because of the way iOS and Android differ, this is either:
      // - iOS: a 'string', describing the issue.
      // - Android: a 'number', referencing an 'ERROR_*' constant from https://developer.android.com/reference/android/speech/SpeechRecognizer.
      // If that code is either 6 or 7 you may want to restart listening (see the sketch below).
    }
  }
).then(
  (started: boolean) => { console.log(`started listening`); },
  (errorMessage: string) => { console.log(`Error: ${errorMessage}`); }
).catch((error: string | number) => {
  // same as the 'onError' handler, but this is not invoked for errors that occur after
  // listening has successfully started (because that resolves the promise), which is
  // why the 'onError' handler exists.
});
```
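Acting on that last comment, an `onError` handler that restarts listening on those Android codes (6 = `ERROR_SPEECH_TIMEOUT`, 7 = `ERROR_NO_MATCH`) could look like this. A minimal sketch: the `RestartingListener` class is illustrative, and the `isAndroid` import assumes NativeScript 7+ (`@nativescript/core`):

```typescript
import { isAndroid } from "@nativescript/core";
import { SpeechRecognition, SpeechRecognitionTranscription } from "nativescript-speech-recognition";

export class RestartingListener {
  private speechRecognition = new SpeechRecognition();

  public listen(): void {
    this.speechRecognition.startListening({
      returnPartialResults: true,
      onResult: (transcription: SpeechRecognitionTranscription) =>
          console.log(`User said: ${transcription.text}`),
      onError: (error: string | number) => {
        // on Android, codes 6 and 7 merely mean no usable speech was
        // detected, so simply start listening again
        if (isAndroid && (error === 6 || error === 7)) {
          this.listen();
        } else {
          console.log(`Speech recognition error: ${error}`);
        }
      }
    });
  }
}
```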
If you're using this plugin in Angular, note that the `onResult` callback is not part of Angular's lifecycle. So either update the UI in an `ngZone` as shown here, or use `ChangeDetectorRef` as shown here.
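A minimal sketch of the `ngZone` approach, assuming a hypothetical component with a bound `recognizedText` property:

```typescript
import { Component, NgZone } from "@angular/core";
import { SpeechRecognition, SpeechRecognitionTranscription } from "nativescript-speech-recognition";

@Component({
  selector: "speech-demo",
  template: `<Label [text]="recognizedText"></Label>`
})
export class SpeechDemoComponent {
  recognizedText = "";
  private speechRecognition = new SpeechRecognition();

  constructor(private zone: NgZone) {}

  public listen(): void {
    this.speechRecognition.startListening({
      returnPartialResults: true,
      onResult: (transcription: SpeechRecognitionTranscription) => {
        // the callback fires outside Angular's zone, so re-enter it
        // to make change detection pick up the new value
        this.zone.run(() => this.recognizedText = transcription.text);
      }
    });
  }
}
```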
### stopListening
```typescript
this.speechRecognition.stopListening().then(
  () => { console.log(`stopped listening`); },
  (errorMessage: string) => { console.log(`Stop error: ${errorMessage}`); }
);
```
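In practice you'll often pair `startListening` and `stopListening` behind a single toggle. A sketch of that pattern (the `ListeningToggle` class and its `listening` flag are mine, not the plugin's):

```typescript
import { SpeechRecognition, SpeechRecognitionTranscription } from "nativescript-speech-recognition";

export class ListeningToggle {
  private listening = false;
  private speechRecognition = new SpeechRecognition();

  public toggle(): void {
    if (this.listening) {
      this.speechRecognition.stopListening().then(
        () => this.listening = false,
        (errorMessage: string) => console.log(`Stop error: ${errorMessage}`));
    } else {
      this.speechRecognition.startListening({
        returnPartialResults: true,
        onResult: (transcription: SpeechRecognitionTranscription) =>
            console.log(`User said: ${transcription.text}`)
      }).then(
        () => this.listening = true,
        (errorMessage: string) => console.log(`Start error: ${errorMessage}`));
    }
  }
}
```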
This plugin is part of the plugin showcase app I built using Angular.
Rather watch a video? Check out this tutorial on YouTube.