Open subhendukundu opened 5 years ago
me too, onSpeechEnd is called automatically after voice recognition in Android.
+1, really need this feature; the time before onSpeechEnd fires is too short.
Hey Guys,
Is this a new issue introduced in the latest build? We had the Android version of our app working fine during development, but now in the beta build it's breaking just like @subhendukundu described above.
Please help, we need to make the app live asap.
Thanks Abhi
Hi Guys,
Any update on this? We are completely stuck with the Android version of our app. Could you please get this resolved asap?
Thanks Abhi
I just ran the example code. The app is able to recognise speech even when the results come back as errors, and it's able to run for a long time.
@Jithinqw do you mean that it continues listening after a long period of silence?
Sorry, the process is stopping. I have to start the process again!
@subhendukundu did you solve your problem?
Still no update on this? I really need this fixed for Android!
I don't think this problem will be resolved, but I tried with the config below:
try {
  await Voice.start('es_US', {
    EXTRA_SPEECH_INPUT_COMPLETE_SILENCE_LENGTH_MILLIS: 30000,
    EXTRA_SPEECH_INPUT_MINIMUM_LENGTH_MILLIS: 30000,
    EXTRA_SPEECH_INPUT_POSSIBLY_COMPLETE_SILENCE_LENGTH_MILLIS: 30000
  });
} catch (exception) {
  console.log(exception, 'exception');
}
It works for about 5-6 seconds before auto-stopping on Android.
Does anyone have insight on how to achieve this natively?
Did anyone solve this? For me, it stops listening right after onSpeechResults. If I call Voice.start at the end of onSpeechResults, there is a lag, so part of the words spoken gets missed. It would be great if someone could help. I have tried EXTRA_SPEECH_INPUT_MINIMUM_LENGTH_MILLIS: 30000, but it did not work.
These options don't work in my project either. Is there any solution for this issue?
EXTRA_SPEECH_INPUT_COMPLETE_SILENCE_LENGTH_MILLIS: 30000,
EXTRA_SPEECH_INPUT_MINIMUM_LENGTH_MILLIS: 30000,
EXTRA_SPEECH_INPUT_POSSIBLY_COMPLETE_SILENCE_LENGTH_MILLIS: 30000
@nikhilbhawsinka @safaiyeh @brkhrn I resolved this with a SpeechToText NativeModule (on Android); you can try this solution.
Create a file SpeechToTextModule.java in android/app/src/../apptest/SpeechToTextModule.java like this:
package com.test.apptest;

import android.app.Activity;
import android.content.Intent;
import android.speech.RecognizerIntent;
import android.widget.Toast;

import com.facebook.react.bridge.ActivityEventListener;
import com.facebook.react.bridge.BaseActivityEventListener;
import com.facebook.react.bridge.Promise;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.bridge.ReactContextBaseJavaModule;
import com.facebook.react.bridge.ReactMethod;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;

public class SpeechToTextModule extends ReactContextBaseJavaModule {

  private static final String DURATION_SHORT_KEY = "SHORT";
  private static final String DURATION_LONG_KEY = "LONG";
  private final int SPEECH_REQUEST_CODE = 123;

  private Promise mPickerPromise;

  public SpeechToTextModule(ReactApplicationContext reactContext) {
    super(reactContext);
    reactContext.addActivityEventListener(mActivityEventListener);
  }

  @Override
  public String getName() {
    return "SpeechToText"; // name the module is exported under in NativeModules
  }

  @Override
  public Map<String, Object> getConstants() {
    final Map<String, Object> constants = new HashMap<>();
    constants.put(DURATION_SHORT_KEY, Toast.LENGTH_SHORT);
    constants.put(DURATION_LONG_KEY, Toast.LENGTH_LONG);
    return constants;
  }

  private final ActivityEventListener mActivityEventListener = new BaseActivityEventListener() {
    @Override
    public void onActivityResult(Activity activity, int requestCode, int resultCode, Intent data) {
      if (requestCode == SPEECH_REQUEST_CODE && mPickerPromise != null) {
        if (resultCode == Activity.RESULT_OK && data != null) {
          ArrayList<String> result =
              data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
          mPickerPromise.resolve(result.get(0));
        } else {
          // Reject so the JS promise doesn't hang when the dialog is cancelled
          mPickerPromise.reject("E_NO_RESULT", "Speech recognition was cancelled or failed");
        }
        mPickerPromise = null;
      }
    }
  };
  @ReactMethod
  public void speak(final Promise promise) {
    Activity currentActivity = getCurrentActivity();

    if (currentActivity == null) {
      // Reject the incoming promise; mPickerPromise has not been assigned yet
      promise.reject("E_NO_ACTIVITY", "Activity doesn't exist");
      return;
    }

    mPickerPromise = promise;

    Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, "en_US");

    try {
      // The listener is already registered in the constructor
      currentActivity.startActivityForResult(intent, SPEECH_REQUEST_CODE);
    } catch (Exception e) {
      mPickerPromise.reject("E_NOT_SUPPORTED", "Speech recognition is not supported on this device", e);
      mPickerPromise = null;
    }
  }
}
Then create a file ModuleSTT.java in android/app/src/../apptest/ModuleSTT.java like this:
package com.test.apptest;

import com.facebook.react.ReactPackage;
import com.facebook.react.bridge.NativeModule;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.uimanager.ViewManager;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ModuleSTT implements ReactPackage {

  @Override
  public List<ViewManager> createViewManagers(ReactApplicationContext reactContext) {
    return Collections.emptyList();
  }

  @Override
  public List<NativeModule> createNativeModules(ReactApplicationContext reactContext) {
    List<NativeModule> modules = new ArrayList<>();
    modules.add(new SpeechToTextModule(reactContext));
    return modules;
  }
}
Next, import ModuleSTT in MainApplication.java (import com.test.apptest.ModuleSTT;), add packages.add(new ModuleSTT()) in the getPackages function, then run react-native run-android.
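For anyone unsure where that registration goes, here is a sketch of the relevant part of MainApplication.java. This is not from the comment above; it assumes the default React Native template (PackageList is the autolinking helper generated by recent RN versions), so adapt it to your project:

```java
// Sketch only: the getPackages() override inside MainApplication's ReactNativeHost.
import com.facebook.react.PackageList;
import com.facebook.react.ReactPackage;
import com.test.apptest.ModuleSTT; // the package created above

import java.util.List;

// ...inside the ReactNativeHost anonymous class in MainApplication.java:
@Override
protected List<ReactPackage> getPackages() {
  // Autolinked packages from node_modules
  List<ReactPackage> packages = new PackageList(this).getPackages();
  // Manually register the SpeechToText native module
  packages.add(new ModuleSTT());
  return packages;
}
```

On older RN versions without PackageList, the same idea applies: add new ModuleSTT() to whatever list getPackages() returns.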
Next, create a file SpeechToText.js in your project with this content:
import { NativeModules } from 'react-native';
module.exports = NativeModules.SpeechToText;
Usage in JS:
import SpeechToText from './SpeechToText.js';

if (Platform.OS === 'android') {
  SpeechToText.speak()
    .then(response => {
      console.log(response, 'response speech');
      this.setState({ result: response, keySearch: response });
      // do anything you want
    })
    .catch(error => {
      console.log(error);
    });
}
Sorry about my English; I hope this helps you resolve the issue.
@anhnd11 could you create a PR with these changes?
Hi @anhnd11, I tried but the recogniser still shuts off after a couple of seconds. Can you please let me know what needs to be done here?
I have somehow hacked together a solution; I'm not sure if it fits every case. What you can do is call _startRecognizing in a loop; this stops it from stopping, but it creates an irritating beeping noise. To mute that noise, use:
AudioManager mAudioManager =
    (AudioManager) this.reactContext.getSystemService(Context.AUDIO_SERVICE);
mAudioManager.adjustStreamVolume(AudioManager.STREAM_NOTIFICATION, AudioManager.ADJUST_MUTE, 0);
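As a sketch of how that mute snippet could be packaged (the class and method names here are made up for illustration, not part of react-native-voice), one could mute the notification stream before each restart and restore it once listening stops for good. Note that on some devices the recognizer chime plays on a different stream, e.g. STREAM_MUSIC, so the stream may need adjusting:

```java
import android.content.Context;
import android.media.AudioManager;

// Hypothetical helper: silence the recognizer's start/stop chime while
// recognition is restarted in a loop. ADJUST_MUTE/ADJUST_UNMUTE require API 23+.
public class BeepMuter {
  private final AudioManager audioManager;

  public BeepMuter(Context context) {
    audioManager = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
  }

  // Call before restarting recognition to suppress the beep.
  public void mute() {
    audioManager.adjustStreamVolume(
        AudioManager.STREAM_NOTIFICATION, AudioManager.ADJUST_MUTE, 0);
  }

  // Call when you are done listening so notification sounds come back.
  public void unmute() {
    audioManager.adjustStreamVolume(
        AudioManager.STREAM_NOTIFICATION, AudioManager.ADJUST_UNMUTE, 0);
  }
}
```

Remember to unmute in the module's teardown path as well, otherwise the user's notifications stay silenced after the app backgrounds.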
@nikhilbhawsinka where do we write this?
I am not sure if these are the correct behaviours, but the events work a bit differently on iOS and Android. On Android the module stops listening to voice after the following methods are called.
Is it possible to keep the module listening without it stopping?
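For context on why this keeps coming up: the underlying android.speech.SpeechRecognizer delivers a single utterance and then stops, so "continuous" listening is usually simulated by restarting it from its own callbacks. A minimal sketch of that pattern (this class is not part of react-native-voice; error handling and lifecycle cleanup are omitted):

```java
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;

// Sketch: restart SpeechRecognizer from its callbacks to simulate
// continuous listening. Must run on the main thread with RECORD_AUDIO granted.
public class ContinuousListener implements RecognitionListener {
  private final SpeechRecognizer recognizer;
  private final Intent intent;

  public ContinuousListener(Context context) {
    recognizer = SpeechRecognizer.createSpeechRecognizer(context);
    recognizer.setRecognitionListener(this);
    intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
  }

  public void start() {
    recognizer.startListening(intent);
  }

  @Override
  public void onResults(Bundle results) {
    // Forward results to JS here, then immediately listen again.
    start();
  }

  @Override
  public void onError(int error) {
    // ERROR_NO_MATCH / ERROR_SPEECH_TIMEOUT fire after silence; restart anyway.
    start();
  }

  // Remaining RecognitionListener callbacks left empty for brevity.
  @Override public void onReadyForSpeech(Bundle params) {}
  @Override public void onBeginningOfSpeech() {}
  @Override public void onRmsChanged(float rmsdB) {}
  @Override public void onBufferReceived(byte[] buffer) {}
  @Override public void onEndOfSpeech() {}
  @Override public void onPartialResults(Bundle partialResults) {}
  @Override public void onEvent(int eventType, Bundle params) {}
}
```

This is exactly the restart-in-a-loop hack described above, which is also why the beep plays on every restart.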
Tested Android versions: 9.0.5, 8.0