nscharrenberg opened this issue 4 years ago
I had a similar issue and was able to resolve it recently. Just to clarify, are you sure the listener is being stopped before you hit "Ask" to start it again?
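If it helps, a quick way to verify that is to guard the restart behind Voice.isRecognizing(). A rough sketch, assuming the usual @react-native-community/voice API; handleAsk is just an illustrative name for whatever your "Ask" button calls:

    import Voice from '@react-native-community/voice'

    // Hypothetical handler for the "Ask" button: make sure the previous
    // session is fully stopped before starting a new one.
    const handleAsk = async () => {
      const stillRunning = await Voice.isRecognizing()
      if (stillRunning) {
        await Voice.stop()
      }
      await Voice.start('en-US')
    }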
I had exactly the same problem and it took me half a day to figure it out. Thanks to @svm1 for the hint.
I was following several tutorials where the listeners were assigned in a useEffect on component mount, like in the code sample above:
useEffect(() => {
  Voice.onSpeechStart = _onSpeechStart;
  Voice.onSpeechEnd = _onSpeechEnd;
  Voice.onSpeechResults = _onSpeechResults;
  Voice.onSpeechError = _onSpeechError;

  return () => {
    Voice.destroy()
      .then(Voice.removeAllListeners)
      .catch((e) => {
        console.log("UNABLE TO DESTROY");
        console.log(e.error);
      });
  };
}, []);
But when I unmounted my voice recognition component, every second and subsequent mount would only fire the onSpeechStart event and nothing more. So there is something odd going on. Normally I would say it's a missing dependency in useEffect, but that would mean recognition works only on the first mount and goes deaf on later starts because of re-renders. Am I right? When I load my component I get three re-renders and it works the first time and every time after that. But if I unmount the voice recognition component and then mount it again, it stays deaf.
I solved this by reassigning the listeners right before starting voice recognition:
const startRecognition = () => {
  console.log('startRecognition')
  Voice.onSpeechEnd = onSpeechEnd
  Voice.onSpeechResults = onSpeechResults
  Voice.onSpeechError = onSpeechError
  Voice.onSpeechPartialResults = onSpeechPartialResults
  Voice.onSpeechVolumeChanged = onSpeechVolumeChanged
  Voice.start('cs-CZ').catch((e) => console.log('ERROR start: ' + e))
}
I am not sure this is the correct approach. Could someone take a look and explain where I went wrong, so we can learn from our mistakes? Here is my full component:
import Voice, { SpeechEndEvent, SpeechErrorEvent, SpeechResultsEvent } from '@react-native-community/voice'
import * as Permissions from 'expo-permissions'
import { usePermissions } from 'expo-permissions'
import React, { useEffect, useState } from 'react'
import { Button, StyleSheet, Text, TouchableWithoutFeedback, View } from 'react-native'

const VoicedInput = (): JSX.Element => {
  const [index, setIndex] = useState(0)
  const [uin, setUin] = useState<string[]>([])
  const [speachResult, setSpeachResult] = useState<string[]>(['init', 'value'])
  const [isVoiceAvailable, setIsVoiceAvailable] = useState(false)
  const [isRecognizing, setIsRecognizing] = useState(false)
  const [intervalx, setIntervalx] = useState<NodeJS.Timer | null>(null)
  const [permission, askForPermission] = usePermissions(Permissions.AUDIO_RECORDING, { ask: true })

  // Polls Voice.isRecognizing() once a second while recognition is running
  const int = (enabled: boolean) => {
    if (enabled) {
      const x = setInterval(() => {
        console.log('Interval')
        Voice.isRecognizing().then((state) => {
          setIsRecognizing(!!state)
          if (state == 0) {
            console.log('here')
            clearInterval(x)
          }
          console.log('state ' + state)
        })
      }, 1000)
      setIntervalx(x)
    } else {
      if (isRecognizing && intervalx !== null) {
        clearInterval(intervalx)
      }
    }
  }

  useEffect(() => {
    console.log('loading...')
    // Voice.onSpeechEnd = onSpeechEnd
    // Voice.onSpeechResults = onSpeechResults
    // Voice.onSpeechError = onSpeechError
    // Voice.onSpeechPartialResults = onSpeechPartialResults
    // Voice.onSpeechVolumeChanged = onSpeechVolumeChanged
    return () => {
      Voice.destroy().then(Voice.removeAllListeners).catch(() => console.log('ERROR Destroy'))
      console.log('destroyed')
    }
  }, [])

  if (!permission || permission.status !== 'granted') {
    return (
      <View>
        <Text>Permission is not granted</Text>
        <Button title="Grant permission" onPress={askForPermission} />
      </View>
    )
  }

  // Note: this runs on every render, not just once
  Voice.isAvailable().then(() => setIsVoiceAvailable(true)).catch((e) => { console.log('ERROR isAvailable') })

  const startRecognition = () => {
    console.log('startRecognition')
    // Reassign the listeners right before each start (the workaround described above)
    Voice.onSpeechEnd = onSpeechEnd
    Voice.onSpeechResults = onSpeechResults
    Voice.onSpeechError = onSpeechError
    Voice.onSpeechPartialResults = onSpeechPartialResults
    Voice.onSpeechVolumeChanged = onSpeechVolumeChanged
    Voice.start('cs-CZ').catch((e) => console.log('ERROR start: ' + e))
    int(true)
  }

  const stopRecognition = () => {
    Voice.stop()
    int(false)
    setIsRecognizing(false)
  }

  const onSpeechVolumeChanged = (event) => {
    console.log(event.value)
  }

  const onSpeechResults = (event: SpeechResultsEvent) => {
    console.log('onSpeechResults: ' + event.value)
  }

  const onSpeechPartialResults = (event: SpeechResultsEvent) => {
    console.log('onSpeechPartialResults')
    if (event.value) {
      setSpeachResult(event.value)
    }
  }

  const onSpeechEnd = (event: SpeechEndEvent) => {
    console.log('onSpeechEnd')
  }

  const onSpeechError = (event: SpeechErrorEvent) => {
    console.log('onSpeechError' + event.error?.message)
  }

  console.log('I have rendered')

  return (
    <View style={{ flex: 1 }}>
      {isVoiceAvailable
        ? <Text style={{ color: 'green' }}>Voice service is available</Text>
        : <Text style={{ color: 'red' }}>Voice service is unavailable</Text>}
      <Text>Result: {speachResult.map((res) => res + ' ')}</Text>
      <Text>{'isRecognizing: ' + isRecognizing}</Text>
      <Text>{'permission: ' + permission.status}</Text>
      {!isRecognizing && isVoiceAvailable && <Button onPress={startRecognition} title={'Start Voice Recognition'} />}
      {isRecognizing && isVoiceAvailable && <Button onPress={stopRecognition} title={'Stop'} />}
    </View>
  )
}

export default VoicedInput
Update: removed unnecessary stuff from example.
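A guess at why the reassignment is needed (this is an assumption on my part, not something confirmed by the maintainers): Voice.destroy() is asynchronous, so when the component unmounts and a new one mounts right away, the old cleanup's destroy().then(Voice.removeAllListeners) can resolve after the new mount's useEffect has already attached its handlers, silently wiping them. Reattaching the handlers right before Voice.start() sidesteps that race. A rough sketch of the same idea packaged as a hook (useVoiceRecognition and its handler wiring are illustrative, not part of the library):

    import { useCallback, useEffect } from 'react'
    import Voice, { SpeechErrorEvent, SpeechResultsEvent } from '@react-native-community/voice'

    // Hypothetical hook: handlers are reattached before every start, so a
    // late-running destroy()/removeAllListeners from a previous unmount
    // cannot leave the current instance without listeners.
    const useVoiceRecognition = (onResults: (e: SpeechResultsEvent) => void) => {
      useEffect(() => {
        return () => {
          Voice.destroy().then(Voice.removeAllListeners).catch(() => console.log('ERROR Destroy'))
        }
      }, [])

      const start = useCallback(async (locale: string) => {
        // Reassign listeners immediately before starting
        Voice.onSpeechResults = onResults
        Voice.onSpeechError = (e: SpeechErrorEvent) => console.log('speech error', e.error)
        try {
          await Voice.start(locale)
        } catch (e) {
          console.log('ERROR start: ' + e)
        }
      }, [onResults])

      const stop = useCallback(() => Voice.stop(), [])

      return { start, stop }
    }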
Dear @Infrus, which unnecessary stuff do you mean I should remove?
@RambousekTomas you are my fucking hero, I was about to throw my laptop out of my seventh floor window.
@nscharrenberg check this, please https://github.com/react-native-voice/voice/issues/299#issuecomment-1988651342
This package doesn't seem to be working correctly on the latest react-native and iOS versions. It works properly on the first run, but once it has been stopped, it won't run again. No exceptions are thrown either.
Does somebody experience the same issue, or know what the problem is (my end or something with the package) and how to solve it?
Note: it is working as expected on Android.
Expected Behavior
iOS voice recognition should return results about what I said any time I call await Voice.start('en-US'), stop whenever I call await Voice.stop(), and fire the corresponding listeners.

Actual Behavior
The first time I call await Voice.start('en-US') (i.e. when clicking a button), it returns the partial results and fires the corresponding listeners. However, the second time (and third, and so forth) I call await Voice.start('en-US') (i.e. pressing the button again), it doesn't do anything. The listeners aren't fired either (so no console output is produced), and the start method itself doesn't throw an error but instead returns undefined.

Steps to Reproduce the Problem
1. npm i @react-native-community/voice --save
2. npx pod-install
3. Add NSMicrophoneUsageDescription and NSSpeechRecognitionUsageDescription to the Info.plist like the example has it.

The relevant parts of my component:

const Chatbot = (props) => {
  const [messages, setMessages] = useState("");
  const [answer, setAnswer] = useState("");
  const [error, setError] = useState("");
  const [isListening, setIsListening] = useState(false);

  ...

  useEffect(() => {
    Voice.onSpeechStart = _onSpeechStart;
    Voice.onSpeechEnd = _onSpeechEnd;
    Voice.onSpeechResults = _onSpeechResults;
    Voice.onSpeechError = _onSpeechError;
  }, []);

  const _onSpeechStart = () => {
    console.log("_onSpeechStart");
    setMessages("");
    setError("");
  }

  const _onSpeechEnd = () => {
    console.log("_onSpeechEnd");
  }

  const _onSpeechResults = (e) => {
    console.log("_onSpeechResults");
  }

  const _onSpeechError = (e) => {
    console.log("_onSpeechError");
    console.log(e.error);
    setError(e.error);
  }

  const _stopListening = () => {
    Voice.stop().then(res => {
      console.log("Voice Stopped");
    });
  }

  let timeout;
  const initDelay = 3000;
  const continueDelay = 300;

  const handleTimeout = () => {
    _stopListening();
  }

  const _startListening = () => {
    setMessages("");
    setError("");
  }

  const _initSpeech = () => {
    if (isListening) {
      _stopListening();
    } else {
      _startListening();
    }
  }

  ...

  return (
  )
}

const mapStateToProps = () => ({ ... })

const mapDispatchToProps = { ... }

export default connect(mapStateToProps, mapDispatchToProps)(Chatbot)
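For anyone hitting the same dead end on iOS: the workaround that ended up working in the comments above is to reattach the event handlers immediately before every Voice.start() call instead of only once on mount. The _startListening body was trimmed from the snippet above, so the following is only an illustrative sketch of how that workaround could be applied to it (the locale and the setIsListening call are assumptions, not the original code):

    // Illustrative sketch only: the real _startListening body was omitted above.
    const _startListening = async () => {
      setMessages("");
      setError("");
      // Reattach handlers right before starting (workaround discussed earlier in this thread)
      Voice.onSpeechStart = _onSpeechStart;
      Voice.onSpeechEnd = _onSpeechEnd;
      Voice.onSpeechResults = _onSpeechResults;
      Voice.onSpeechError = _onSpeechError;
      try {
        await Voice.start('en-US');
        setIsListening(true);
      } catch (e) {
        console.log("ERROR start: " + e);
      }
    };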