Closed deJong-IT closed 2 years ago
Are you calling `stop` after you have the speech you want? What control are you using to play the audio? Does it have an equivalent stop?
Your change doesn't look bad, but I'd like to understand why it's necessary in this case, since I haven't seen that behaviour before. That will help make sure we don't miss any corner cases with any potential change.
At the moment we call cancel instead of stop, because we don't need the extra callback, but we tested it with both stop and cancel.
Even after we call stop or cancel we still see the following logs:
```
2021-07-15 12:10:51.933242+0200 Runner[5056:2364011] [plugin] FinishSuccessfully
2021-07-15 12:10:51.933315+0200 Runner[5056:2364011] [plugin] invokeFlutter notifyStatus
2021-07-15 12:10:51.936651+0200 Runner[5056:2364189] flutter: onstatus: "notListening"
```
So it looks like we get the same FinishSuccessfully more than once.
For playing the audio we use Flutter Sound Lite: https://pub.dev/packages/flutter_sound_lite. And yes, we also stop the audio before listening for speech.
Other than the log message does the failed deactivation cause any problems? One option would be just to expect that the deactivation will fail sometimes and change the logging from info to trace.
I had a more detailed look at your change, and while it should generally work, there are cases where the audio session is set active without listening being successfully set to true. In those cases your change would mean the plugin no longer deactivates the audio session. I added all of the extra deactivation code because there were issues with interactions with other users of the audio: either the other users couldn't play audio, or the speech plugin couldn't get new sessions.
The problem isn't the message: when it happens it stops all running audio, not just speech but also other (non-speech) audio that was started after the speech had already stopped.
So the timeline looks something like this:
That's really helpful, thanks for giving me the sequence. So at least one issue is that speech detection isn't finishing quickly enough. Are you doing an `await speech.stop()` before doing any other sound work?
I wonder if I can make it clean up its audio session immediately, so that you could be confident that after `stop` completes you could reuse the audio. You said "a few seconds later": is the time delta really that large? Anything over 100 ms or so would be surprising, so I don't need exact numbers, just a good idea of the order of magnitude.
Another question: are you using the start/stop listening sounds on iOS? I'm asking because they slow down the stop process, since it waits for the sound to play.
I just committed a change to the repo that may help. I've added a new status, `done`, for the `onStatus` callback. This status is only sent once the speech recognizer has shut down all of its use of the audio session. This new status always comes after the existing `notListening` status. I'm thinking that if you wait for that `done` status before starting to play other sounds, it should work better. If you have a chance to try it, please let me know.
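A minimal sketch of how a caller might wait for that status, assuming the `onStatus` callback receives the literal string 'done'; `SpeechDoneGate` is an illustrative helper, not part of the plugin API:

```dart
import 'dart:async';

/// Hypothetical helper: completes a future once the speech recognizer
/// reports the new 'done' status, so other audio can safely be started.
class SpeechDoneGate {
  Completer<void> _done = Completer<void>();

  /// Forward the plugin's onStatus callback here (e.g. from the
  /// callback passed to SpeechToText.initialize()).
  void onStatus(String status) {
    if (status == 'done' && !_done.isCompleted) {
      _done.complete();
    }
  }

  /// Await this before starting playback with another plugin.
  Future<void> waitForDone() => _done.future;

  /// Call before the next listen() so the gate can be reused.
  void reset() => _done = Completer<void>();
}
```

The caller would then `await gate.waitForDone();` before invoking the player, rather than starting playback as soon as `notListening` arrives.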
About the time: it's really 5-10 seconds, not just 100 ms.
We're not using the start/stop sounds.
Thanks for the new status, I'll take a look at it and see if it's going to work. I'm not sure if I'll have the time to do this this week, but I will let you know.
If I still have issues, I can try to make a tiny app that demonstrates them.
Thanks for responding. 5-10 seconds is way too long, especially since you're not using the start/stop sounds. It really should be in the 10s or maybe 100s of ms in this scenario. I haven't been able to reproduce this kind of delay on any of my devices. What device and OS are you using to test on?
Please do try the new version when you get a chance and let me know.
Okay, I did some additional testing. Here is a quick and dirty test: I downloaded your example and added the flutter_sound_lite package.
What I try to do is loop: play audio > wait for speech.
This is main.dart:
```dart
import 'package:flutter/material.dart';
import 'package:speech_to_text/speech_recognition_error.dart';
import 'package:speech_to_text/speech_recognition_result.dart';
import 'package:speech_to_text/speech_to_text.dart';
import 'package:flutter_sound_lite/flutter_sound.dart';
import 'package:flutter_sound_lite/public/flutter_sound_player.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'TEST',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: MyHomePage(title: 'TEST'),
    );
  }
}

class MyHomePage extends StatefulWidget {
  MyHomePage({Key key, this.title}) : super(key: key);

  final String title;

  @override
  _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  final SpeechToText _speechToText = SpeechToText();
  final FlutterSoundPlayer _player = FlutterSoundPlayer();
  String _info = '';

  @override
  void initState() {
    super.initState();
    _init();
  }

  void _loopTest() async {
    _info = "***** Starting loop test ***** \n";
    if (_player.isPlaying) {
      _info += "Stop audio player\n";
      await _player.stopPlayer();
    }
    if (_speechToText.isListening) {
      _info += "Cancel Speech\n";
      await _speechToText.cancel();
    }
    _info += "Open Audio Session\n";
    await _player.openAudioSession();
    String audioUri =
        "https://file-examples-com.github.io/uploads/2017/11/file_example_MP3_700KB.mp3";
    _info += "Start Player\n";
    await _player.startPlayer(
        fromURI: audioUri,
        codec: Codec.mp3,
        whenFinished: () {
          _info += "Play complete, start listening\n";
          _speechToText.listen(onResult: (SpeechRecognitionResult result) {
            _info += "Got words: ${result.recognizedWords}\n";
            _speechToText.cancel();
            _loopTest();
          });
        });
    setState(() {});
  }

  void _init() async {
    _info += "Init speech\n";
    await _speechToText.initialize(onError: _onError, onStatus: _onStatus);
    setState(() {});
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text(widget.title),
      ),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            Row(
              mainAxisAlignment: MainAxisAlignment.center,
              children: [
                TextButton(onPressed: _loopTest, child: Text('Loop test')),
              ],
            ),
            Expanded(
              child: Column(
                children: [
                  Divider(),
                  Text(_info),
                ],
              ),
            ),
          ],
        ),
      ),
    );
  }

  void _onStatus(String status) {
    _info += "Status: $status\n";
    setState(() {});
  }

  void _onError(SpeechRecognitionError errorNotification) {
    _info += "Error: ${errorNotification.errorMsg}\n";
    setState(() {});
  }
}
```
On Android there is no problem; on iOS you get an error after a few loops:
```
[plugin] invokeFlutter notifyStatus
[plugin] FinishSuccessfully with error: Optional(Error Domain=kAFAssistantErrorDomain Code=1 "(null)")
[plugin] FinishSuccessfully
[avas] AVAudioSession_iOS.mm:1206 Deactivating an audio session that has running I/O. All I/O should be stopped or paused prior to deactivating the audio session.
[plugin] Error deactivation: The operation couldn’t be completed. (OSStatus error 560030580.)
[plugin] invokeFlutter notifyStatus
```
I'm testing on an iPad Air (iOS 14.6) and also on an iPhone XR (iOS 14.4).
Thanks a lot for the reproducible case.
It doesn't look like it's using the new `done` status to me. The change would be to split `_loopTest` in two: one method for the audio playback and one for the speech recognition. Invoke the speech recognition as you're doing now, from the playback completion, but invoke the playback method only once you've seen a `done` status in the `_onStatus` callback.
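A sketch of that split, with the plugin calls injected as plain functions so only the sequencing is shown; `LoopController` and its members are illustrative names, not plugin API:

```dart
/// Hypothetical controller that splits the loop into a playback step and
/// a listening step, restarting playback only on the 'done' status.
class LoopController {
  LoopController(this.playAudio, this.startListening);

  final Future<void> Function() playAudio;      // e.g. _player.startPlayer(...)
  final Future<void> Function() startListening; // e.g. _speechToText.listen(...)

  bool _waitingForDone = false;

  /// Call from the player's whenFinished callback.
  Future<void> onPlaybackFinished() async {
    _waitingForDone = true;
    await startListening();
  }

  /// Wire into the _onStatus callback; only 'done' restarts playback,
  /// 'notListening' and other statuses are ignored.
  Future<void> onStatus(String status) async {
    if (status == 'done' && _waitingForDone) {
      _waitingForDone = false;
      await playAudio();
    }
  }
}
```

The design choice is that neither step calls the other directly: playback and listening are chained only through the plugin's own completion events, so playback can never start while the recognizer still holds the audio session.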
There is, however, a change I need to make to the `done` status before you can do that test. Currently it only works if you speak while the `listen` method is active. If no speech is processed, then `done` is never sent. I'm working on a fix for that now. If you want to try it in the meantime, you'd have to keep speaking during the test.
I just committed the changes that should make the done status stable for the various possible states of speech recognition. I'm currently working to recreate your loop test but having some trouble getting it to compile, something about the flutter_sound_lite dependency. I'll post again once I have it working.
Okay, I have a looping app that can occasionally reproduce the result. I had one reproduction in 263 loops, then another run of over 500 without a reproduction. I've committed the app to the repo, so if anyone else wants to try it, check out `examples/audio_player_interaction`. I'm not sure whether the failure is from this plugin or the audio player plugin. If anyone wanted to modify this to use a different audio playback plugin, that would be interesting.
Note that this is fairly different from the loop test copied above. This one uses events from the underlying plugins to decide when to switch from listening to playing. Since at least the speech recognition is asynchronous with the completion of the method invoked (`listen`), I think this is the correct approach.
Thanks a lot for the hard work.
I never had any problems on Android. It's also possible it's a bug in iOS; it wouldn't be the first. I think the main issue is that both the player and the speech detection use the same audio session, and I'm not sure if there is a workaround for that.
I'm going to take a look at your example and see if I can integrate the same idea in our app. But to be honest, my workaround at the start of this thread still works, and I've had no complaints about the audio being stopped.
These changes are available now in 5.0.0.
We're creating an app that plays audio, then waits for speech, then plays more audio, waits for speech, plays audio, etc.
Every once in a while we get the following error on iOS:
It looks like the audio session is stopped even when speech is not listening at the moment. The following change in SwiftSpeechToTextPlugin.swift (ln 337) seems to fix the issue:
I'm not sure if I forgot something, and whether something else is now broken.