spokestack / react-native-spokestack

Spokestack: give your React Native app a voice interface!
https://spokestack.io
Apache License 2.0

AudioOutputUnitStop Issue on iOS Release #49

Closed GoncaloFazenda closed 5 years ago

GoncaloFazenda commented 5 years ago

[Screenshot 2019-05-22 at 11:26:12]

noelweichbrodt commented 5 years ago

Hi Goncalo,

Looks like you're using an older version of the spokestack-ios dependency. This was fixed in spokestack-ios 1.0.11, and the current react-native-spokestack 1.5.8 release has it.

GoncaloFazenda commented 5 years ago

I've updated Spokestack, but that version is not detecting the wakeword (not working).

noelweichbrodt commented 5 years ago

Hi Goncalo,

Sorry that you're having problems. If you could post your use case and Spokestack configuration, I may be able to help.


GoncaloFazenda commented 5 years ago

The app recognises the wake word, but only the first time. After being activated once, it does not activate again.

This does not happen on the iOS simulator, only on a physical device.

Here's our config:

if (Platform.OS === 'ios') {
    Spokestack.initialize({
        input: 'com.pylon.spokestack.android.MicrophoneInput', // required; provides audio input into the stages
        stages: [
            'com.pylon.spokestack.webrtc.AutomaticGainControl', // automatic gain control; normalizes microphone levels
            'com.pylon.spokestack.webrtc.VoiceActivityDetector', // voice activity detection; necessary to trigger speech recognition
            'com.pylon.spokestack.wakeword.WakewordTrigger',
            'com.pylon.spokestack.google.GoogleSpeechRecognizer' // one of the two supplied speech recognition services
        ],
        properties: {
            'vad-fall-delay': 500,
            // 'pre-emphasis': 0.30,
            'wake-words': 'wakeUP',
            'wake-smooth-length': 20000,
            'wake-phrases': 'wakeUP',
            locale: 'en-US',
            'google-credentials': JSON.stringify(require('./json.json')), // Android-supported API
            // tslint:disable-next-line:max-line-length
            'google-api-key': 'MY KEY', // iOS-supported Google API
            'trace-level': Spokestack.TraceLevel.DEBUG
        }
    })
    Spokestack.start() // start the speech pipeline; can only be called after initialize
}

Spokestack.onError = e => {
    Spokestack.stop()
}
Spokestack.onStart = e => {
    // tslint:disable-next-line:no-console
    console.log('onStart')
}

Spokestack.onDeactivate = e => {
    this.setState({ onSpeech: false })
}
Spokestack.onStop = e => {
    // tslint:disable-next-line:no-console
    console.log('onStop')
}

Spokestack.onActivate = e => {
    if (!this.state.onSpeech) this.setState({ onSpeech: true })
}
Spokestack.onTrace = e => { // fires according to the trace-level property
    // tslint:disable-next-line:no-console
    console.tron.log(e.message)
}
Spokestack.onRecognize = e => {
    alert(e.transcript)
    // some code here
}
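One workaround we're considering is restarting the whole pipeline whenever it deactivates, so the wakeword recognizer re-arms. This is only a sketch, and it assumes `Spokestack.stop()` and `Spokestack.start()` (the calls already used above) are safe to invoke back-to-back from a JS event handler:

```javascript
// Hypothetical workaround sketch: re-arm the wakeword recognizer by
// restarting the pipeline after every deactivation. `spokestack` is
// whatever object exposes stop()/start() (the real Spokestack module
// in the app); this helper only wires the restart logic.
function wireAutoRestart(spokestack) {
    spokestack.onDeactivate = () => {
        spokestack.stop()  // tear the pipeline down...
        spokestack.start() // ...and bring it back up so it listens again
    }
}
```

If the state update in the original `onDeactivate` handler is still needed, it would have to be folded into the same callback, since assigning `onDeactivate` replaces the previous handler.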

Apple SpeechPipeline init
AppleSpeechRecognizer init
AppleWakewordRecognizer init
AudioController init
Apple SpeechPipeline start, context isActive false
AudioController startStreaming
AppleWakewordRecognizer startStreaming
AppleWakewordRecognizer prepareAudioEngine
AppleWakewordRecognizer startRecognition
AppleWakewordRecognizer createRecognitionTask
AppleWakewordRecognizer dispatchWorker
AppleWakewordRecognizer stopRecognition
AppleWakewordRecognizer startRecognition
AppleWakewordRecognizer createRecognitionTask
AppleWakewordRecognizer dispatchWorker
AppleWakewordRecognizer stopRecognition
AppleWakewordRecognizer startRecognition
AppleWakewordRecognizer createRecognitionTask
AppleWakewordRecognizer dispatchWorker
AppleWakewordRecognizer stopRecognition
AppleWakewordRecognizer startRecognition
AppleWakewordRecognizer createRecognitionTask
AppleWakewordRecognizer dispatchWorker
AppleWakewordRecognizer stopRecognition
AppleWakewordRecognizer startRecognition
AppleWakewordRecognizer createRecognitionTask
AppleWakewordRecognizer dispatchWorker
AppleWakewordRecognizer stopRecognition
AppleWakewordRecognizer startRecognition
AppleWakewordRecognizer createRecognitionTask
AppleWakewordRecognizer dispatchWorker
AppleWakewordRecognizer stopRecognition
AppleWakewordRecognizer startRecognition
AppleWakewordRecognizer createRecognitionTask

GoncaloFazenda commented 5 years ago

After some more debugging, here are the updated logs:

Apple SpeechPipeline init
AppleSpeechRecognizer init
AppleWakewordRecognizer init
AudioController init
Apple SpeechPipeline start, context isActive false
AudioController startStreaming
AppleWakewordRecognizer startStreaming
AppleWakewordRecognizer prepareAudioEngine
AppleWakewordRecognizer startRecognition
AppleWakewordRecognizer createRecognitionTask
AppleWakewordRecognizer recognitionTask resultHandler
AppleWakewordRecognizer hears Jell-O
AppleWakewordRecognizer recognitionTask resultHandler
AppleWakewordRecognizer hears Jell-O nice
AppleWakewordRecognizer recognitionTask resultHandler
AppleWakewordRecognizer hears Jell-O nice to
AppleWakewordRecognizer recognitionTask resultHandler
AppleWakewordRecognizer hears Jell-O nice to meet
AppleWakewordRecognizer recognitionTask resultHandler
AppleWakewordRecognizer hears Hello nice to meet
AppleWakewordRecognizer recognitionTask resultHandler
AppleWakewordRecognizer hears Hello nice to meet you
AppleWakewordRecognizer recognitionTask resultHandler
AppleWakewordRecognizer hears Hello nice to meet you assistant
AppleWakewordRecognizer wakeword detected
Apple SpeechPipeline activate
AppleWakewordRecognizer stopStreaming
AppleWakewordRecognizer stopRecognition
AppleSpeechRecognizer startStreaming
AppleSpeechRecognizer prepareAudioEngine
AppleWakewordRecognizer recognitionTask resultHandler
AudioController audioRouteChanged reason: 3 notification: [AnyHashable("AVAudioSessionRouteChangeReasonKey"): 3, AnyHashable("AVAudioSessionRouteChangePreviousRouteKey"): <AVAudioSessionRouteDescription: 0x2831f8dc0, 
inputs = (
    "<AVAudioSessionPortDescription: 0x2831f89d0, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Bottom>"
); 
outputs = (
    "<AVAudioSessionPortDescription: 0x2831f8c40, type = Speaker; name = Speaker; UID = Speaker; selectedDataSource = (null)>"
)>]
AudioController printAudioSessionDebug current category: AVAudioSessionCategoryPlayback options: 0 isOtherAudioPlaying: false bufferduration 0.02133333310484886
AudioController printAudioSessionDebug inputs: [<AVAudioSessionPortDescription: 0x2831e0bd0, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Front>] preferredinput: nil input: [<AVAudioSessionPortDescription: 0x2831e42e0, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Front>] output: [<AVAudioSessionPortDescription: 0x2831ec5e0, type = Speaker; name = Speaker; UID = Speaker; selectedDataSource = (null)>] usb_outputs: []
AudioController audioRouteChanged new category: AVAudioSessionCategoryPlayback
AppleSpeechRecognizer wakeActiveMaxWorker
Apple SpeechPipeline deactivate
AppleSpeechRecognizer stopStreaming
AppleWakewordRecognizer startStreaming
AppleWakewordRecognizer prepareAudioEngine
AppleWakewordRecognizer startRecognition
AppleWakewordRecognizer createRecognitionTask
AppleSpeechRecognizer createRecognitionTask resultHandler
AppleSpeechRecognizer createRecognitionTask resultHandler error 216
Apple SpeechPipeline stop
AppleSpeechRecognizer stopStreaming
AppleWakewordRecognizer stopStreaming
AppleWakewordRecognizer stopRecognition
AudioController stopStreaming
AppleWakewordRecognizer recognitionTask resultHandler
AppleWakewordRecognizer createRecognitionTask resultHandler error timeout203
AppleWakewordRecognizer stopRecognition
AppleWakewordRecognizer startRecognition
AppleWakewordRecognizer createRecognitionTask
AppleWakewordRecognizer recognitionTask resultHandler
AppleWakewordRecognizer createRecognitionTask resultHandler error timeout203
AppleWakewordRecognizer stopRecognition
AppleWakewordRecognizer startRecognition
AppleWakewordRecognizer createRecognitionTask

It seems to me that the problem is related to wakeActiveMaxWorker, but I don't know what it means. What else can I do to make this work on a physical device? It's only working on the simulator.
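From the name, wakeActiveMaxWorker looks like a timer that force-deactivates ASR once a maximum activation window expires. If that's right, I wonder whether it can be tuned through a pipeline property, something like this in the `properties` object (the name `'wake-active-max'` is my guess from the worker name, not confirmed against the docs):

```
properties: {
    // ...existing properties from the config above...
    // Assumed: max ASR activation length in ms enforced by
    // wakeActiveMaxWorker; the property name is unverified.
    'wake-active-max': 5000
}
```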