newyann666 opened this issue 1 year ago
I notice you have onDevice: true commented out. Had you tried it and then commented it out for some reason? Without an Internet connection, only on-device processing will work, since the default speech processing on Android uses the server. On-device mode only works on some modern devices.
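For context, requesting on-device recognition with the plugin is just a flag on listen. A minimal sketch using the speech_to_text API (the result handler body here is illustrative, not from the original code):

```dart
import 'package:speech_to_text/speech_to_text.dart';

final SpeechToText speech = SpeechToText();

/// Sketch: ask Android to keep recognition on the device. With
/// onDevice: true no audio is sent to the server, so this is the
/// only mode that can work in airplane mode -- but it fails on
/// devices without on-device support.
Future<void> listenOffline() async {
  if (await speech.initialize()) {
    await speech.listen(
      onResult: (result) => print(result.recognizedWords),
      onDevice: true,
    );
  }
}
```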
Thanks for the response. Yes, I've tried it both with and without, and neither works in offline mode. Do you have a list of devices that work with 'onDevice' mode, or a trick to force Android to use its own local speech processing?
Unfortunately I haven't yet been able to find a list of devices that support it. That error might indicate that it's not supported on the device, but I'm surprised that something as new as a Samsung A33 wouldn't have support.
Could you try the current version from the repo for me? I added some logging so that it will show `Setting on device listener` if the device claims on-device processing is supported and you ask for it. Use version 6.1.1.
Here are the logs made with the 6.1.1 logging changes.
Summary of the four tests: only online mode with onDevice: false works.
Galaxy A33 - onDevice: false
Airplane mode Off
```
flutter (15880): 2022-10-30T09:28:41.315763 Initialize
SpeechToTextPlugin(15880): Start initialize
SpeechToTextPlugin(15880): Checked permission
SpeechToTextPlugin(15880): has permission, completing
SpeechToTextPlugin(15880): completeInitialize
SpeechToTextPlugin(15880): Testing recognition availability
BluetoothHeadset(15880): BTStateChangeCB is registed by 15880 @ fr....
SpeechToTextPlugin(15880): sending result
SpeechToTextPlugin(15880): leaving complete
SpeechToTextPlugin(15880): leaving initializeIfPermitted
SpeechToTextPlugin(15880): Received extra language broadcast
SpeechToTextPlugin(15880): Extra supported languages
flutter (15880): 2022-10-30T09:28:43.639008 start listening
SpeechToTextPlugin(15880): Start listening
SpeechToTextPlugin(15880): setupRecognizerIntent
SpeechToTextPlugin(15880): Notify status:listening
SpeechToTextPlugin(15880): Start listening done
flutter (15880): 2022-10-30T09:28:43.651826 Received listener status: listening, listening: true
SpeechToTextPlugin(15880): rmsDB -2.0 / -2.0
SpeechToTextPlugin(15880): Calling results callback
SpeechToTextPlugin(15880): rmsDB -2.0 / 10.0
flutter (15880): 2022-10-30T09:28:45.358893 Result listener final: false, words:
SpeechToTextPlugin(15880): rmsDB -2.0 / 10.0
SpeechToTextPlugin(15880): Calling results callback
flutter (15880): 2022-10-30T09:28:45.924355 Result listener final: false, words: bonjour
SpeechToTextPlugin(15880): rmsDB -2.0 / 10.0
SpeechToTextPlugin(15880): Notify status:notListening
SpeechToTextPlugin(15880): Notify status:done
SpeechToTextPlugin(15880): rmsDB -2.0 / 10.0
flutter (15880): 2022-10-30T09:28:46.275742 Received listener status: notListening, listening: false
SpeechToTextPlugin(15880): rmsDB -2.0 / 10.0
SpeechToTextPlugin(15880): Calling results callback
flutter (15880): 2022-10-30T09:28:46.460058 Result listener final: true, words: bonjour
flutter (15880): 2022-10-30T09:28:46.460601 Received listener status: done, listening: false
```
Airplane mode On
```
flutter (15880): 2022-10-30T09:31:12.786412 Initialize
SpeechToTextPlugin(15880): Start initialize
SpeechToTextPlugin(15880): Checked permission
SpeechToTextPlugin(15880): has permission, completing
SpeechToTextPlugin(15880): completeInitialize
SpeechToTextPlugin(15880): Testing recognition availability
BluetoothHeadset(15880): BTStateChangeCB is registed by 15880 @ fr.okteo.comptagebovins.comptagebovins
SpeechToTextPlugin(15880): sending result
SpeechToTextPlugin(15880): leaving complete
SpeechToTextPlugin(15880): leaving initializeIfPermitted
SpeechToTextPlugin(15880): Received extra language broadcast
SpeechToTextPlugin(15880): Extra supported languages
I/flutter (15880): 2022-10-30T09:31:15.799271 start listening
D/SpeechToTextPlugin(15880): Start listening
D/SpeechToTextPlugin(15880): setupRecognizerIntent
D/SpeechToTextPlugin(15880): Notify status:listening
D/SpeechToTextPlugin(15880): Start listening done
I/flutter (15880): 2022-10-30T09:31:15.883082 Received listener status: listening, listening: true
D/SpeechToTextPlugin(15880): rmsDB -2.0 / -2.0
D/SpeechToTextPlugin(15880): Stop listening
D/SpeechToTextPlugin(15880): Notify status:notListening
D/SpeechToTextPlugin(15880): Notify status:doneNoResult
I/flutter (15880): 2022-10-30T09:31:18.891705 Received listener status: notListening, listening: false
D/SpeechToTextPlugin(15880): Stop listening done
I/flutter (15880): 2022-10-30T09:31:18.892473 Received listener status: done, listening: false
D/SpeechToTextPlugin(15880): Error 7 after start at 3193 -2.0 / 10.0
I/flutter (15880): 2022-10-30T09:31:19.010493 Received error status: SpeechRecognitionError msg: error_no_match, permanent: true, listening: false
```
Galaxy A33 - onDevice: true
Airplane mode Off
```
flutter (15880): 2022-10-30T09:39:40.841973 Initialize
SpeechToTextPlugin(15880): Start initialize
SpeechToTextPlugin(15880): Checked permission
SpeechToTextPlugin(15880): has permission, completing
SpeechToTextPlugin(15880): completeInitialize
SpeechToTextPlugin(15880): Testing recognition availability
BluetoothHeadset(15880): BTStateChangeCB is registed by 15880 @ fr....
SpeechToTextPlugin(15880): sending result
SpeechToTextPlugin(15880): leaving complete
SpeechToTextPlugin(15880): leaving initializeIfPermitted
SpeechToTextPlugin(15880): Received extra language broadcast
SpeechToTextPlugin(15880): Extra supported languages
flutter (15880): 2022-10-30T09:39:43.403918 start listening
SpeechToTextPlugin(15880): Start listening
SpeechToTextPlugin(15880): setupRecognizerIntent
SpeechToTextPlugin(15880): Notify status:listening
SpeechToTextPlugin(15880): Start listening done
flutter (15880): 2022-10-30T09:39:43.417845 Received listener status: listening, listening: true
SpeechToTextPlugin(15880): rmsDB -2.0 / -2.0
SpeechToTextPlugin(15880): Stop listening
SpeechToTextPlugin(15880): Notify status:notListening
SpeechToTextPlugin(15880): Notify status:doneNoResult
flutter (15880): 2022-10-30T09:39:46.445745 Received listener status: notListening, listening: false
SpeechToTextPlugin(15880): Stop listening done
flutter (15880): 2022-10-30T09:39:46.446439 Received listener status: done, listening: false
SpeechToTextPlugin(15880): rmsDB -2.0 / 10.0
SpeechToTextPlugin(15880): Error 7 after start at 3065 -2.0 / 10.0
flutter (15880): 2022-10-30T09:39:46.497397 Received error status: SpeechRecognitionError msg: error_no_match, permanent: true, listening: false
```
Airplane mode On
```
flutter (15880): 2022-10-30T09:37:07.190991 Initialize
SpeechToTextPlugin(15880): Start initialize
SpeechToTextPlugin(15880): Checked permission
SpeechToTextPlugin(15880): has permission, completing
SpeechToTextPlugin(15880): completeInitialize
SpeechToTextPlugin(15880): Testing recognition availability
BluetoothHeadset(15880): BTStateChangeCB is registed by 15880 @ fr....
SpeechToTextPlugin(15880): sending result
SpeechToTextPlugin(15880): leaving complete
SpeechToTextPlugin(15880): leaving initializeIfPermitted
SpeechToTextPlugin(15880): Received extra language broadcast
SpeechToTextPlugin(15880): Extra supported languages
flutter (15880): 2022-10-30T09:37:10.524952 start listening
SpeechToTextPlugin(15880): before setup intent
SpeechToTextPlugin(15880): setupRecognizerIntent
SpeechToTextPlugin(15880): after setup intent
SpeechToTextPlugin(15880): Start listening
SpeechToTextPlugin(15880): setupRecognizerIntent
SpeechToTextPlugin(15880): Notify status:listening
SpeechToTextPlugin(15880): Start listening done
SpeechToTextPlugin(15880): Creating recognizer
SpeechToTextPlugin(15880): Setting default listener
SpeechToTextPlugin(15880): In RecognizerIntent apply
SpeechToTextPlugin(15880): put model
SpeechToTextPlugin(15880): put package
SpeechToTextPlugin(15880): put partial
SpeechToTextPlugin(15880): In RecognizerIntent apply
SpeechToTextPlugin(15880): put model
SpeechToTextPlugin(15880): put package
SpeechToTextPlugin(15880): put partial
SpeechToTextPlugin(15880): put languageTag
flutter (15880): 2022-10-30T09:37:10.595476 Received listener status: listening, listening: true
SpeechToTextPlugin(15880): rmsDB -2.0 / -2.0
SpeechToTextPlugin(15880): Stop listening
SpeechToTextPlugin(15880): Notify status:notListening
SpeechToTextPlugin(15880): Notify status:doneNoResult
flutter (15880): 2022-10-30T09:37:13.625211 Received listener status: notListening, listening: false
SpeechToTextPlugin(15880): Stop listening done
flutter (15880): 2022-10-30T09:37:13.626058 Received listener status: done, listening: false
SpeechToTextPlugin(15880): rmsDB -2.0 / 10.0
SpeechToTextPlugin(15880): Error 7 after start at 3111 -2.0 / 10.0
flutter (15880): 2022-10-30T09:37:13.668427 Received error status: SpeechRecognitionError msg: error_no_match, permanent: true, listening: false
```
I tested on a Moto g9 power (Android 11), using the example app from version 6.1.1, and I also received an error when trying to use it with onDevice: true, both online and offline. I tried with the Portuguese (Brazil) language.
It works fine with onDevice: false.
Hi guys. I think I have found a workaround that worked previously on an old version, and it seems to work in this version as well.
I changed the plugin code to:

```dart
Future listen(
    {SpeechResultListener? onResult,
    Duration? listenFor,
    Duration? pauseFor,
    SpeechSoundLevelChange? onSoundLevelChange,
    String? localeId = 'pt_BR', // changed: hard-coded locale default
    cancelOnError = false,
    partialResults = true,
    onDevice = true, // changed: defaults to on-device
    [...]
```
Additionally, in my own code, I removed the localeId argument:

```dart
speech.listen(
    onResult: resultListener,
    listenFor: Duration(seconds: 10),
    pauseFor: Duration(seconds: 3),
    partialResults: false,
    // localeId: 'pt_BR', // changed: no longer passed here
    onSoundLevelChange: soundLevelListener,
    cancelOnError: true,
    onDevice: true,
    [...]
```
Not sure why this works, but it would be a good idea to test it on your phone @newyann666.
Hello all,
Thanks @dnsprado for the response. I made the changes you showed and it works, in both modes (online, and offline with airplane mode on)! That's great, because offline mode is mandatory for me.
@sowens-csd, will these changes be applied to the code, or should some other value be passed to the 'listen' method?
```dart
var systemLocale = await speech.systemLocale();
_currentLocaleId = systemLocale?.localeId ?? '';
```

For me, _currentLocaleId contains 'fr_FR'.
I'm testing this now on a Pixel 6a. Even with the suggested changes I can't get offline recognition working; it keeps giving me the language-unavailable error result even though I've installed the relevant offline language packs. Can you paste the exact code changes you made, because so far I can't reproduce your result.
I've made some headway, at least I have seen it work properly. The bad news is that Android 13 is going to be a problem. They've removed the method that used to work to retrieve the available locales. Working on that now. I'd still like to see your code though because I don't understand why your version would work with those changes.
localeId is one candidate for why the change given above by @dnsprado works. The format of the localeId parameter is shown in the Android docs as:

> Optional IETF language tag (as defined by BCP 47), for example "en-US".

In the snippets shown above, both of you used an underscore rather than a hyphen. Possibilities are that the format is different for on-device recognition, or that by using an incorrect format it fell back to some default value.
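If the separator is the culprit, normalizing the tag before passing it to listen is an easy thing to try. A sketch with a hypothetical helper (toBcp47 is not part of the plugin) that converts the underscore form seen above into the hyphenated BCP 47 form the Android docs specify:

```dart
/// Hypothetical helper: convert an underscore-style locale id such as
/// 'fr_FR' into the hyphenated IETF BCP 47 form 'fr-FR'.
String toBcp47(String localeId) => localeId.replaceAll('_', '-');

void main() {
  print(toBcp47('fr_FR')); // fr-FR
  print(toBcp47('pt_BR')); // pt-BR
}
```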
Hello, as @dnsprado recommended, comment out the localeId parameter in the listen method call. Here are the changes in speech_to_text.dart:

```dart
Future listen(
    {SpeechResultListener? onResult,
    Duration? listenFor,
    Duration? pauseFor,
    //String? localeId,
    SpeechSoundLevelChange? onSoundLevelChange,
    cancelOnError = false,
    partialResults = true,
    onDevice = false,
    ListenMode listenMode = ListenMode.confirmation,
    sampleRate = 0}) async {
  if (!_initWorked) {
    throw SpeechToTextNotInitializedException();
  }
  _lastRecognized = '';
  // ......
  try {
    var started = await SpeechToTextPlatform.instance.listen(
        partialResults: partialResults || null != pauseFor,
        onDevice: onDevice,
        listenMode: listenMode.index,
        sampleRate: sampleRate
        //,localeId: localeId
        );
```
and in main.dart:

```dart
void startListening() {
  _logEvent('start listening');
  lastWords = '';
  lastError = '';
  final pauseFor = int.tryParse(_pauseForController.text);
  final listenFor = int.tryParse(_listenForController.text);
  speech.listen(
      onResult: resultListener,
      listenFor: Duration(seconds: listenFor ?? 30),
      pauseFor: Duration(seconds: pauseFor ?? 3),
      partialResults: true,
      //localeId: _currentLocaleId,
      onSoundLevelChange: soundLevelListener,
      cancelOnError: true,
      listenMode: ListenMode.confirmation,
      onDevice: true);
  setState(() {});
}
```
This is very helpful, thanks for posting. The problem appears to happen when I try to set a non-default locale while also doing on-device processing. Commenting out the localeId means it is always the default, which, in the Android code, means the locale is not set at all. You should be able to get the same result without changing the library code; just don't send the localeId parameter from the client app.
However, it should be possible to set the language for local processing, so I'm now looking into why that's not working.
Well this is super frustrating. So far it seems like on-device processing does not support setting the language code. It works in the device's default language but not in any other language. It may well be that I have not yet found the magic to install the offline language pack, though I've tried a bunch of things. The most promising was in Settings | System | Languages & input | Voice input. From there you can access Speech Services for Google > Offline speech and Add a language. However, even after adding several languages, trying to set the language code for them results in an error 13 response. That error code implies that the language is supported but is not available. The same language code works properly for non-on-device processing.
To install the offline language pack, launch the Google app, tap your account picture at the top right, choose "Settings", then "Voice", and finally "Offline speech recognition".
Thanks for the input. I hadn't found that location. Unfortunately it appears to be another way to get to the same settings I had already tried; the language packs were installed there. So I'm still stuck with on-device recognition working for the default language but not for any other.
Hi guys, have you tried to use it offline in Android 12? The workaround is not working in this version =/
I'm going to test the "offline mode workaround" in a few days on a Samsung A33 device (Android 12). I'll post the result here.
Sup guys, any updates on this Android 12 issue?
Did you try the finalTimeout parameter of speech.initialize()? The default timeout is 2 seconds, so you may see the recognition fail after just a moment. I recommend setting finalTimeout to 10 seconds.
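A sketch of that suggestion, assuming the finalTimeout parameter on initialize in current speech_to_text versions:

```dart
import 'package:speech_to_text/speech_to_text.dart';

final speech = SpeechToText();

/// Sketch: give the recognizer more time to deliver a final result
/// before the plugin gives up (the default is about 2 seconds).
Future<bool> initWithLongerTimeout() {
  return speech.initialize(
    finalTimeout: Duration(seconds: 10),
    debugLogging: true,
  );
}
```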
Hello,
Before explaining my problem, I want to thank all the contributors to this great project!
I need to use this dependency in an app that runs where Wi-Fi and other networks are absent or very slow. So I tried using it with airplane mode enabled, and it's not working. I took the example and ran it:
import 'dart:async';
import 'dart:math';

import 'package:flutter/material.dart';
import 'package:speech_to_text/speech_recognition_error.dart';
import 'package:speech_to_text/speech_recognition_result.dart';
import 'package:speech_to_text/speech_to_text.dart';

void main() => runApp(SpeechSampleApp());

class SpeechSampleApp extends StatefulWidget {
  @override
  _SpeechSampleAppState createState() => _SpeechSampleAppState();
}

/// An example that demonstrates the basic functionality of the
/// SpeechToText plugin for using the speech recognition capability
/// of the underlying platform.
class _SpeechSampleAppState extends State<SpeechSampleApp> {
bool _hasSpeech = false;
bool _logEvents = false;
final TextEditingController _pauseForController =
TextEditingController(text: '3');
final TextEditingController _listenForController =
TextEditingController(text: '30');
double level = 0.0;
double minSoundLevel = 50000;
double maxSoundLevel = -50000;
String lastWords = '';
String lastError = '';
String lastStatus = '';
String _currentLocaleId = '';
List<LocaleName> _localeNames = [];
final SpeechToText speech = SpeechToText();
@override
void initState() {
  super.initState();
}

/// This initializes SpeechToText. That only has to be done
/// once per application, though calling it again is harmless;
/// it also does nothing. The UX of the sample app ensures that
/// it can only be called once.
Future<void> initSpeechState() async {
_logEvent('Initialize');
try {
var hasSpeech = await speech.initialize(
onError: errorListener,
onStatus: statusListener,
debugLogging: true,
);
if (hasSpeech) {
// Get the list of languages installed on the supporting platform so they
// can be displayed in the UI for selection by the user.
_localeNames = await speech.locales();
    }
    if (!mounted) return;
    setState(() {
      _hasSpeech = hasSpeech;
    });
  } catch (e) {
    setState(() {
      lastError = 'Speech recognition failed: ${e.toString()}';
      _hasSpeech = false;
    });
  }
}

@override
Widget build(BuildContext context) {
  return MaterialApp(
    home: Scaffold(
      appBar: AppBar(
        title: const Text('Speech to Text Example'),
      ),
      body: Column(children: [
        HeaderWidget(),
        Container(
          child: Column(
            children: <Widget>[
InitSpeechWidget(_hasSpeech, initSpeechState),
SpeechControlWidget(_hasSpeech, speech.isListening,
startListening, stopListening, cancelListening),
SessionOptionsWidget(
_currentLocaleId,
_switchLang,
_localeNames,
_logEvents,
_switchLogging,
_pauseForController,
_listenForController,
),
],
),
),
Expanded(
flex: 4,
child: RecognitionResultsWidget(lastWords: lastWords, level: level),
),
Expanded(
flex: 1,
child: ErrorWidget(lastError: lastError),
),
SpeechStatusWidget(speech: speech),
]),
),
);
}
// This is called each time the user wants to start a new speech
// recognition session
void startListening() {
  _logEvent('start listening');
  lastWords = '';
  lastError = '';
  final pauseFor = int.tryParse(_pauseForController.text);
  final listenFor = int.tryParse(_listenForController.text);
  // Note that listenFor is the maximum, not the minimum; on some
  // systems recognition will be stopped before this value is reached.
  // Similarly pauseFor is a maximum, not a minimum, and may be ignored
  // on some devices.
  speech.listen(
      onResult: resultListener,
      listenFor: Duration(seconds: listenFor ?? 30),
      pauseFor: Duration(seconds: pauseFor ?? 3),
      partialResults: true,
      localeId: _currentLocaleId,
      onSoundLevelChange: soundLevelListener,
      cancelOnError: true,
      listenMode: ListenMode.confirmation,
      //onDevice: true
      );
  setState(() {});
}

void stopListening() {
  _logEvent('stop');
  speech.stop();
  setState(() {
    level = 0.0;
  });
}
void cancelListening() { _logEvent('cancel'); speech.cancel(); setState(() { level = 0.0; }); }
/// This callback is invoked each time new recognition results are
/// available after listen is called.
void resultListener(SpeechRecognitionResult result) {
  _logEvent(
      'Result listener final: ${result.finalResult}, words: ${result.recognizedWords}');
  setState(() {
    lastWords = '${result.recognizedWords} - ${result.finalResult}';
  });
}

void soundLevelListener(double level) {
  minSoundLevel = min(minSoundLevel, level);
  maxSoundLevel = max(maxSoundLevel, level);
  // _logEvent('sound level $level: $minSoundLevel - $maxSoundLevel ');
  setState(() {
    this.level = level;
  });
}
void errorListener(SpeechRecognitionError error) {
  _logEvent(
      'Received error status: $error, listening: ${speech.isListening}');
  setState(() {
    lastError = '${error.errorMsg} - ${error.permanent}';
  });
}

void statusListener(String status) {
  _logEvent(
      'Received listener status: $status, listening: ${speech.isListening}');
  setState(() {
    lastStatus = '$status';
  });
}

void _switchLang(selectedVal) {
  setState(() {
    _currentLocaleId = selectedVal;
  });
  print(selectedVal);
}

void _logEvent(String eventDescription) {
  if (_logEvents) {
    var eventTime = DateTime.now().toIso8601String();
    print('$eventTime $eventDescription');
  }
}

void _switchLogging(bool? val) {
  setState(() {
    _logEvents = val ?? false;
  });
}
}
/// Displays the most recently recognized words and the sound level.
class RecognitionResultsWidget extends StatelessWidget {
  const RecognitionResultsWidget({
    Key? key,
    required this.lastWords,
    required this.level,
  }) : super(key: key);

  final String lastWords;
  final double level;

  @override
  Widget build(BuildContext context) {
    return Column(
      children: <Widget>[
Center(
child: Text(
'Recognized Words',
style: TextStyle(fontSize: 22.0),
),
),
Expanded(
child: Stack(
children: [
Container(
color: Theme.of(context).selectedRowColor,
child: Center(
child: Text(
lastWords,
textAlign: TextAlign.center,
),
),
),
Positioned.fill(
bottom: 10,
child: Align(
alignment: Alignment.bottomCenter,
child: Container(
width: 40,
height: 40,
alignment: Alignment.center,
decoration: BoxDecoration(
boxShadow: [
BoxShadow(
blurRadius: .26,
spreadRadius: level * 1.5,
color: Colors.black.withOpacity(.05))
],
color: Colors.white,
borderRadius: BorderRadius.all(Radius.circular(50)),
),
child: IconButton(
icon: Icon(Icons.mic),
onPressed: () => null,
),
),
),
),
],
),
),
],
);
}
}
class HeaderWidget extends StatelessWidget {
  const HeaderWidget({
    Key? key,
  }) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return Center(
      child: Text(
        'Speech recognition available',
        style: TextStyle(fontSize: 22.0),
      ),
    );
  }
}
/// Display the current error status from the speech recognizer.
class ErrorWidget extends StatelessWidget {
  const ErrorWidget({
    Key? key,
    required this.lastError,
  }) : super(key: key);

  final String lastError;

  @override
  Widget build(BuildContext context) {
    return Column(
      children: <Widget>[
Center(
child: Text(
'Error Status',
style: TextStyle(fontSize: 22.0),
),
),
Center(
child: Text(lastError),
),
],
);
}
}
/// Controls to start and stop speech recognition.
class SpeechControlWidget extends StatelessWidget {
  const SpeechControlWidget(this.hasSpeech, this.isListening,
      this.startListening, this.stopListening, this.cancelListening,
      {Key? key})
      : super(key: key);

  final bool hasSpeech;
  final bool isListening;
  final void Function() startListening;
  final void Function() stopListening;
  final void Function() cancelListening;

  @override
  Widget build(BuildContext context) {
    return Row(
      mainAxisAlignment: MainAxisAlignment.spaceAround,
      children: <Widget>[
TextButton(
onPressed: !hasSpeech || isListening ? null : startListening,
child: Text('Start'),
),
TextButton(
onPressed: isListening ? stopListening : null,
child: Text('Stop'),
),
TextButton(
onPressed: isListening ? cancelListening : null,
child: Text('Cancel'),
)
],
);
}
}
class SessionOptionsWidget extends StatelessWidget {
  const SessionOptionsWidget(this.currentLocaleId, this.switchLang,
      this.localeNames, this.logEvents, this.switchLogging,
      this.pauseForController, this.listenForController,
      {Key? key})
      : super(key: key);

  final String currentLocaleId;
  final void Function(String?) switchLang;
  final void Function(bool?) switchLogging;
  final TextEditingController pauseForController;
  final TextEditingController listenForController;
  final List<LocaleName> localeNames;
  final bool logEvents;

  @override
  Widget build(BuildContext context) {
    return Padding(
      padding: const EdgeInsets.all(8.0),
      child: Column(
        mainAxisAlignment: MainAxisAlignment.spaceBetween,
        children: <Widget>[
Row(
children: [
Text('Language: '),
DropdownButton(
onChanged: (selectedVal) => switchLang(selectedVal),
value: currentLocaleId,
items: localeNames
.map(
(localeName) => DropdownMenuItem(
value: localeName.localeId,
child: Text(localeName.name),
),
)
.toList(),
),
],
),
Row(
children: [
Text('pauseFor: '),
Container(
padding: EdgeInsets.only(left: 8),
width: 80,
child: TextFormField(
controller: pauseForController,
)),
Container(
padding: EdgeInsets.only(left: 16),
child: Text('listenFor: ')),
Container(
padding: EdgeInsets.only(left: 8),
width: 80,
child: TextFormField(
controller: listenForController,
)),
],
),
Row(
children: [
Text('Log events: '),
Checkbox(
value: logEvents,
onChanged: switchLogging,
),
],
),
],
),
);
}
}
class InitSpeechWidget extends StatelessWidget {
  const InitSpeechWidget(this.hasSpeech, this.initSpeechState, {Key? key})
      : super(key: key);

  final bool hasSpeech;
  final Future Function() initSpeechState;

  @override
  Widget build(BuildContext context) {
    return Row(
      mainAxisAlignment: MainAxisAlignment.spaceAround,
      children: <Widget>[
TextButton(
onPressed: hasSpeech ? null : initSpeechState,
child: Text('Initialize'),
),
],
);
}
}
/// Display the current status of the listener.
class SpeechStatusWidget extends StatelessWidget {
  const SpeechStatusWidget({
    Key? key,
    required this.speech,
  }) : super(key: key);

  final SpeechToText speech;

  @override
  Widget build(BuildContext context) {
    return Container(
      padding: EdgeInsets.symmetric(vertical: 20),
      color: Theme.of(context).backgroundColor,
      child: Center(
        child: speech.isListening
            ? Text(
                "I'm listening...",
                style: TextStyle(fontWeight: FontWeight.bold),
              )
            : Text(
                'Not listening',
                style: TextStyle(fontWeight: FontWeight.bold),
              ),
      ),
    );
  }
}
It works like a charm when I have network connectivity, but not when airplane mode is enabled. Note that I've downloaded Google's offline speech recognition pack for my device language. Here are the logs from Android Studio.

Airplane mode enabled, after tapping "Initialize":
```
SpeechToTextPlugin fr.... D Start initialize
SpeechToTextPlugin fr.... D Checked permission
SpeechToTextPlugin fr.... D has permission, completing
SpeechToTextPlugin fr.... D completeInitialize
SpeechToTextPlugin fr.... D Testing recognition availability
SpeechToTextPlugin fr.... D sending result
SpeechToTextPlugin fr.... D leaving complete
SpeechToTextPlugin fr.... D leaving initializeIfPermitted
SpeechToTextPlugin fr.... D Received extra language broadcast
SpeechToTextPlugin fr.... D Extra supported languages
```
After tapping "Start":
```
flutter fr.... I 2022-10-27T21:09:55.109317 start listening
SpeechToTextPlugin fr.... D Start listening
SpeechToTextPlugin fr.... D setupRecognizerIntent
SpeechToTextPlugin fr.... D Notify status:listening
SpeechToTextPlugin fr.... D Start listening done
flutter fr.... I 2022-10-27T21:09:55.117853 Received listener status: listening, listening: true
SpeechToTextPlugin fr.... D Error 2 after start at 22 1000.0 / -100.0
SpeechToTextPlugin fr.... D Notify status:notListening
SpeechToTextPlugin fr.... D Notify status:doneNoResult
flutter fr.... I 2022-10-27T21:09:55.172214 Received listener status: notListening, listening: false
flutter fr.... I 2022-10-27T21:09:55.172649 Received listener status: done, listening: false
flutter fr.... I 2022-10-27T21:09:55.172960 Received error status: SpeechRecognitionError msg: error_network, permanent: true, listening: false
SpeechToTextPlugin fr.... D rmsDB -2.0 / -2.0
```
Message on screen: error_network - true
Airplane mode enabled, plus adding the parameter onDevice: true in the code for speech.listen. After tapping "Initialize":

```
SpeechToTextPlugin fr.... D Start initialize
SpeechToTextPlugin fr.... D Checked permission
SpeechToTextPlugin fr.... D has permission, completing
SpeechToTextPlugin fr.... D completeInitialize
SpeechToTextPlugin fr.... D Testing recognition availability
SpeechToTextPlugin fr.... D sending result
SpeechToTextPlugin fr.... D leaving complete
SpeechToTextPlugin fr.... D leaving initializeIfPermitted
SpeechToTextPlugin fr.... D Received extra language broadcast
SpeechToTextPlugin fr.... D Extra supported languages
```

After tapping "Start":

```
SpeechToTextPlugin fr.... D before setup intent
SpeechToTextPlugin fr.... D setupRecognizerIntent
SpeechToTextPlugin fr.... D after setup intent
SpeechToTextPlugin fr.... D Start listening
SpeechToTextPlugin fr.... D setupRecognizerIntent
SpeechToTextPlugin fr.... D Notify status:listening
SpeechToTextPlugin fr.... D Start listening done
SpeechToTextPlugin fr.... D Creating recognizer
SpeechToTextPlugin fr.... D Setting listener
SpeechToTextPlugin fr.... D In RecognizerIntent apply
SpeechToTextPlugin fr.... D put model
SpeechToTextPlugin fr.... D put package
SpeechToTextPlugin fr.... D put partial
SpeechToTextPlugin fr.... D In RecognizerIntent apply
SpeechToTextPlugin fr.... D put model
SpeechToTextPlugin fr.... D put package
SpeechToTextPlugin fr.... D put partial
SpeechToTextPlugin fr.... D put languageTag
SpeechToTextPlugin fr.... D Error 5 after start at 21 1000.0 / -100.0
SpeechToTextPlugin fr.... D Notify status:notListening
SpeechToTextPlugin fr.... D Notify status:doneNoResult
SpeechToTextPlugin fr.... D rmsDB -2.0 / -2.0
```

Message on screen: error_client - true

I tried on a Samsung Galaxy Note 10 (Android 11) and a Samsung A33 (Android 12), with the same result :(