kmmansour2 opened 3 years ago
@savelee Any advice on this issue? I tried a different implementation: I now get a transcript response, but no query response. Also, when I stop recording, the gRPC session seems to keep listening for audio and eventually times out. In this attempt I use the generated protocol buffers directly, alongside your packages:

```dart
import 'dart:async';
import 'dart:developer';
import 'dart:io';

import 'package:dialogflow_grpc/generated/google/cloud/dialogflow/v2/session.pb.dart'
    as pbSession;
import 'package:dialogflow_grpc/generated/google/cloud/dialogflow/v2/session.pbgrpc.dart';
```
```dart
void handleStream() async {
  log('entering handleStream');
  _recorder.start();

  var config = InputConfigV2(
      encoding: 'AUDIO_ENCODING_LINEAR_16',
      languageCode: 'en-US',
      sampleRateHertz: 16000);
  if (Platform.isIOS) {
    // NOTE: currently identical to the default config above.
    config = InputConfigV2(
        encoding: 'AUDIO_ENCODING_LINEAR_16',
        languageCode: 'en-US',
        sampleRateHertz: 16000);
  }

  pbSession.QueryInput queryInput = pbSession.QueryInput()
    ..audioConfig = config.cast();

  // The first request carries the session name and the audio config.
  var request = StreamController<pbSession.StreamingDetectIntentRequest>();
  request.add(pbSession.StreamingDetectIntentRequest()
    ..queryInput = queryInput
    ..session = DialogflowAuth.session);

  // Every subsequent request carries a chunk of raw audio.
  _audioStreamSubscription = _recorder.audioStream.listen((audio) {
    request.add(pbSession.StreamingDetectIntentRequest()..inputAudio = audio);
  });

  _audioStreamSubscription.onDone(() {
    // Close the request stream once the audio stream is finished.
    request.close();
    log('closed request');
  });

  // Make the streamingDetectIntent call directly on the generated client
  // (session.pbgrpc.dart), passing the request stream.
  var responseStream = dialogflow.client.streamingDetectIntent(request.stream);

  String transcript = '';
  String queryText = '';
  String fulfillmentText = '';

  // Show the transcript and the detected intent on screen.
  responseStream.listen((data) {
    log('---- responseStream ----');
    setState(() {
      print(data);
      transcript = data.recognitionResult.transcript;
      queryText = data.queryResult.queryText;
      fulfillmentText = data.queryResult.fulfillmentText;

      if (fulfillmentText.isNotEmpty) {
        messages.add({
          'message': queryText,
          'isUserMessage': true,
        });
        messages.add({
          'message': fulfillmentText,
          'isUserMessage': false,
        });
      }
      if (transcript.isNotEmpty) {
        _textController.text = transcript;
      }
    });
  }, onError: (e) {
    print('!!!!!!! error: $e');
  }, onDone: () {
    log('done');
    log('transcript, $transcript');
    log('queryText, $queryText');
    log('fulfillmentText, $fulfillmentText');
  });
}
```
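One note on the response handling above: a `StreamingDetectIntentResponse` generally carries either an interim `recognitionResult` or, in the final message, a `queryResult`, not both at once, so reading every field on every event yields empty strings most of the time. A minimal sketch of guarding on the `has*()` accessors that the Dart protobuf generator emits:

```dart
responseStream.listen((data) {
  setState(() {
    // Interim speech-recognition events stream in first.
    if (data.hasRecognitionResult()) {
      _textController.text = data.recognitionResult.transcript;
    }
    // The matched intent only arrives in the final response.
    if (data.hasQueryResult()) {
      messages.add({
        'message': data.queryResult.queryText,
        'isUserMessage': true,
      });
      messages.add({
        'message': data.queryResult.fulfillmentText,
        'isUserMessage': false,
      });
    }
  });
});
```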
Here is my stop routine, and what seems to happen when I call it:
```dart
void stopStream() async {
  setState(() {
    _recorder.stop();
    log('_recorder.stop() called, recorder stopped');
    _isRecording = false;
  });
}
```
```
I/flutter (32540): !!!!!!! error: gRPC Error (code: 11, codeName: OUT_OF_RANGE, message: While calling Cloud Speech API: Audio Timeout Error: Long duration elapsed without audio. Audio should be sent close to real time., details: [], rawResponse: null)
```
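That OUT_OF_RANGE error is Cloud Speech reporting that the stream stayed open without receiving audio close to real time. Note that `stopStream()` above stops the recorder but never cancels `_audioStreamSubscription` or closes the request `StreamController`, so the gRPC call keeps waiting for audio that never comes. A minimal sketch of a stop routine that half-closes the request stream, assuming the controller is kept in a (hypothetical) `_requestController` field:

```dart
// Hypothetical field so stopStream() can reach the controller
// created in handleStream().
StreamController<pbSession.StreamingDetectIntentRequest>? _requestController;

void stopStream() async {
  _recorder.stop();
  // Cancelling a subscription does NOT fire its onDone handler,
  // so close the request stream explicitly.
  await _audioStreamSubscription?.cancel();
  await _requestController?.close();
  setState(() => _isRecording = false);
}
```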
@kmmansour2 How did you get past this compile error?

```
The argument type 'BehaviorSubject<List<int>?>?' can't be assigned to the parameter type 'Stream<List<int>>'.
```

It comes from this line:

```dart
final responseStream =
    dialogflow?.streamingDetectIntent(config, _audioStream);
```
I tried an upcast, since BehaviorSubject implements Stream (see https://pub.dev/documentation/rxdart/latest/rx/BehaviorSubject-class.html):

```dart
Stream<List<int>>? _audioStreamCast = _audioStream;
final responseStream =
    dialogflow?.streamingDetectIntent(config, _audioStreamCast!);
```
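A plain upcast can't work here: the element types differ (`List<int>?` vs `List<int>`), not just the nullability of the subject itself. One way to bridge the two, sketched under the assumption that `_audioStream` really is declared as `BehaviorSubject<List<int>?>?`, is to filter out null events and unwrap the rest:

```dart
// Assumed declaration: BehaviorSubject<List<int>?>? _audioStream;
final Stream<List<int>> audioStream = _audioStream!
    .where((chunk) => chunk != null) // drop null events
    .map((chunk) => chunk!);         // element type is now List<int>

final responseStream =
    dialogflow?.streamingDetectIntent(config, audioStream);
```

Declaring the subject with a non-nullable element type (`BehaviorSubject<List<int>>`) in the first place avoids the conversion entirely.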
I set everything up according to the instructions, but when I call streamingDetectIntent I get a gRPC error:

```dart
final responseStream = dialogflow.streamingDetectIntent(config, _audioStream);
```

```
I/flutter (19109): gRPC Error (code: 11, codeName: OUT_OF_RANGE, message: While calling Cloud Speech API: Audio Timeout Error: Long duration elapsed without audio. Audio should be sent close to real time., details: [], rawResponse: null)
```
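This is the same timeout as above: the server closes the stream when no audio arrives close to real time. Before suspecting the gRPC call itself, it may be worth confirming that recorder chunks actually reach `_audioStream`, for example with a quick log (a sketch assuming the sound_stream-style setup from the package example):

```dart
_audioStreamSubscription = _recorder.audioStream.listen((chunk) {
  // If this never prints, the problem is the recorder or its
  // microphone permission, not Dialogflow.
  print('audio chunk: ${chunk.length} bytes');
  _audioStream.add(chunk);
});
```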