waleedAhmad1 / google-glass-api

Automatically exported from code.google.com/p/google-glass-api

Support for Contextual voice commands in GDK #273

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?
1. Create a LiveCard with the GDK 
https://developers.google.com/glass/develop/gdk/ui/live-cards

What is the expected output? What do you see instead?
I would expect an API, similar to LiveCard.setAction(PendingIntent), for 
specifying a contextual menu that appears when I say "ok glass" on my LiveCard, 
as in the Maps LiveCard: "Show route overview"...
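
For comparison, the closest thing the GDK offers today is a tap-triggered menu 
via setAction; a rough sketch against the preview-era API (names changed in 
later releases, and MenuActivity and R.layout.card_layout are placeholders):

    // Publish a live card whose TAP action opens a menu activity.
    TimelineManager tm = TimelineManager.from(this);
    LiveCard liveCard = tm.createLiveCard("my_card");
    liveCard.setViews(new RemoteViews(getPackageName(), R.layout.card_layout));
    Intent menuIntent = new Intent(this, MenuActivity.class);
    liveCard.setAction(PendingIntent.getActivity(this, 0, menuIntent, 0));
    liveCard.publish();

What is missing is the voice-driven equivalent of that setAction hook.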

What version of the product are you using? On what operating system?
Google Glass - XE11

Please provide any additional information below.

Original issue reported on code.google.com by juan.tr...@intelygenz.com on 29 Nov 2013 at 12:48

GoogleCodeExporter commented 9 years ago

Original comment by allev...@google.com on 2 Dec 2013 at 5:10

GoogleCodeExporter commented 9 years ago

Original comment by ala...@google.com on 2 Dec 2013 at 5:41

GoogleCodeExporter commented 9 years ago
Issue 245 has been merged into this issue.

Original comment by ala...@google.com on 3 Dec 2013 at 12:05

GoogleCodeExporter commented 9 years ago
Issue 254 has been merged into this issue.

Original comment by ala...@google.com on 3 Dec 2013 at 12:12

GoogleCodeExporter commented 9 years ago
Broadening the title of this bug so that we can use it for voice commands both 
on live cards and in activities.

Original comment by allev...@google.com on 6 Jan 2014 at 11:44

GoogleCodeExporter commented 9 years ago
Any idea when this will be supported? I don't want an exact timeframe, just a 
rough sense of whether it's sooner rather than later.

Original comment by deric.wa...@gmail.com on 6 Jan 2014 at 11:50

GoogleCodeExporter commented 9 years ago
Could this also be expanded to support contextual voice commands immediately 
after the launch phrase? For example, I'd like my user to be able to say (from 
the home card) "Ok glass...<my custom voice trigger>...<a contextual voice 
item>". From my understanding, I would currently have to start an activity, and 
then ask the user to say "ok glass" again, which seems a bit cumbersome.

What I'd like to do is restrict my user to a few pre-determined commands within 
my app instead of polluting the collection of device-wide voice triggers with a 
separate entry for each sub-action in my app.

Original comment by trogdor3...@gmail.com on 22 Jan 2014 at 6:14

GoogleCodeExporter commented 9 years ago
#7 I think that what you want to achieve is already implemented in XE12. Have a 
look at the documentation: 
https://developers.google.com/glass/develop/gdk/input/voice?hl=fr#starting_glassware
There is no double "ok glass".

Original comment by sarra...@icare.ch on 22 Jan 2014 at 8:04

GoogleCodeExporter commented 9 years ago
#8 Thanks for the suggestion, but what I'm trying to accomplish is similar to 
what currently happens when the user says "ok glass... send a message" and Glass 
presents them with an explicit list of people's names that they can say.

Original comment by trogdor3...@gmail.com on 23 Jan 2014 at 2:00

GoogleCodeExporter commented 9 years ago
sarra - I'd like what the "send a message" app does, where you can 
programmatically build a list of specific voice commands without requiring the 
cloud voice recognition to pick them up.

"ok glass"
"send a message"
"john smith" but not "purple monkey dishwasher"

Original comment by bage...@gmail.com on 23 Jan 2014 at 2:20

GoogleCodeExporter commented 9 years ago
#10 Yes, this is precisely what I was trying to convey.

Original comment by trogdor3...@gmail.com on 23 Jan 2014 at 3:14

GoogleCodeExporter commented 9 years ago
In testing, I took the compass sample and made some tweaks. When I installed 
the apk and said "show compass", I was presented with two options, one for each 
app_name. I'm not sure whether something similar could be accomplished by 
configuring multiple labels for activities in the same apk. It wouldn't be as 
full a solution as I really want, but it might be a way to get a small set of 
predefined commands in there.

Original comment by bage...@gmail.com on 23 Jan 2014 at 7:12
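
If anyone wants to reproduce that experiment, the per-activity wiring uses the 
documented VOICE_TRIGGER mechanism in the manifest; a sketch (activity names 
and keywords here are invented):

    <activity android:name=".CompassActivityA"
              android:label="@string/app_name_a">
        <intent-filter>
            <action android:name="com.google.android.glass.action.VOICE_TRIGGER" />
        </intent-filter>
        <meta-data android:name="com.google.android.glass.VoiceTrigger"
                   android:resource="@xml/voice_trigger_a" />
    </activity>

    <!-- res/xml/voice_trigger_a.xml -->
    <trigger keyword="@string/show_compass_a" />

Each activity declared this way gets its own entry in the "ok glass" menu, 
which is what produces the two options described above.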

GoogleCodeExporter commented 9 years ago
#10, this is exactly what I need as well. Triggering the voice recognition 
activity through touchpad gestures is hard if the user's hands are busy working 
with tools. Contextual voice commands would allow the user to keep using the 
application without resorting to non-voice input. Maybe decompiling the 
existing on-Glass apps using the tools provided here would uncover how Google 
does it?

https://github.com/jaredsburrows/OpenQuartz/tree/master/glass-source

Original comment by Motiejus...@gmail.com on 24 Jan 2014 at 4:14

GoogleCodeExporter commented 9 years ago
Any update on when this will be released?  I'd really like my users to have 
voice command control of my app (pretty much the exact same reason as #13's 
comment).

Original comment by jennifer...@gmail.com on 27 Jan 2014 at 10:47

GoogleCodeExporter commented 9 years ago
When will this be implemented?

Original comment by ryan.kop...@gmail.com on 29 Jan 2014 at 10:32

GoogleCodeExporter commented 9 years ago
Is it possible to implement an "ok glass" command in my Google Glass apps, like 
the feature available on native live cards?

I've tried various ways, but somehow it isn't possible, so could you add this 
as well?

Original comment by rkjhaw1...@gmail.com on 31 Jan 2014 at 10:29

GoogleCodeExporter commented 9 years ago
The absence of contextual voice commands is the biggest frustration I have with 
the GDK so far. It is not trivial for the user to switch between talking to the 
device and touching it in order to achieve a simple operation.

Original comment by ojosdeg...@gmail.com on 1 Feb 2014 at 4:30

GoogleCodeExporter commented 9 years ago
I agree; my immersive application is in dire need of contextual voice commands 
before I can show it to my clients.

Original comment by gzr...@gmail.com on 2 Feb 2014 at 5:11

GoogleCodeExporter commented 9 years ago
#17 is right. Hands-free operation with internal voice commands is paramount to 
the success of Glass. And it must work without a Wi-Fi connection or a 
cloud-based voice recognizer. Please support this!

Original comment by ben.ba...@ymail.com on 20 Feb 2014 at 5:34

GoogleCodeExporter commented 9 years ago
[deleted comment]
GoogleCodeExporter commented 9 years ago
Yes, please provide an offline speech recognition API where the developer can 
pass a JSGF file for context-sensitive recognition.

Original comment by rha...@gmail.com on 20 Feb 2014 at 5:56
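
For readers unfamiliar with it, JSGF is a plain-text grammar format that 
restricts the recognizer to an explicit command set. An illustrative grammar 
(not a format Glass currently accepts) would be:

    #JSGF V1.0;
    grammar commands;
    public <command> = show compass | take a picture | stop recording;

A recognizer constrained to such a grammar only has to pick the best match 
among a handful of phrases, which is exactly the nearest-match behavior the 
home screen exhibits.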

GoogleCodeExporter commented 9 years ago
I also need speech recognition that works offline with a list of possible 
commands. Can you please tell us when it will be available? (If it's more than 
a month, we will have to find an alternative.) Thank you very much in advance.

Original comment by diamp93 on 12 Mar 2014 at 1:02

GoogleCodeExporter commented 9 years ago
So does XE16 now support contextual voice commands without requiring a separate 
voice recognizer intent? 
Also, KitKat has offline voice recognition; does XE16? 

Original comment by srina...@gmail.com on 16 Apr 2014 at 2:52

GoogleCodeExporter commented 9 years ago
Doubtful. Glass needed to move to KitKat, and now that that's done... I think 
we need to wait for the Android Wear rollout.

Original comment by gzr...@gmail.com on 16 Apr 2014 at 4:05

GoogleCodeExporter commented 9 years ago
Good news is that the SpeechRecognizer can be used directly now, so you can 
implement your own contextual voice commands without having to trigger the 
SpeechRecognizer activity. 

The downside is that it still doesn't support offline voice recognition.

Original comment by dkruzic on 17 Apr 2014 at 10:55

GoogleCodeExporter commented 9 years ago
Sweet! That's a huge leap in app usability already. Thanks for the update. 

Offline voice recognition is very important too - still need that.

Original comment by srina...@gmail.com on 18 Apr 2014 at 12:57

GoogleCodeExporter commented 9 years ago
dkruzic

Can you please share more details on that feature update? A sample code snippet 
would be hugely helpful. 

Original comment by glasspan...@gmail.com on 18 Apr 2014 at 2:39

GoogleCodeExporter commented 9 years ago
Hi everyone,

Here is a code example that works (it needs android.content.Intent, 
android.speech.RecognizerIntent, and android.speech.SpeechRecognizer imported):

    mSpeechRecognizer = SpeechRecognizer.createSpeechRecognizer(this);
    mRecognitionListener = new AbstractMainRecognitionListener();
    mSpeechRecognizer.setRecognitionListener(mRecognitionListener);
    mSpeechIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    //mSpeechIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, "en-US");  // i18n
    mSpeechIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    mSpeechIntent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true);  // stream partial hypotheses
    mSpeechIntent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 100);       // to loop over up to X results

The EXTRA_* extras are optional.

You need to implement a RecognitionListener:

    public class AbstractMainRecognitionListener implements RecognitionListener
    {...}
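
For reference, the callback set is fixed by the android.speech.RecognitionListener 
interface, so a minimal skeleton (illustrative bodies only, not the actual class 
used above) looks like:

    import android.os.Bundle;
    import android.speech.RecognitionListener;
    import android.speech.SpeechRecognizer;

    public class AbstractMainRecognitionListener implements RecognitionListener {
        @Override public void onReadyForSpeech(Bundle params) {}
        @Override public void onBeginningOfSpeech() {}
        @Override public void onRmsChanged(float rmsdB) {}
        @Override public void onBufferReceived(byte[] buffer) {}
        @Override public void onEndOfSpeech() {}
        @Override public void onError(int error) {
            // e.g. SpeechRecognizer.ERROR_CLIENT, ERROR_NO_MATCH, ...
        }
        @Override public void onResults(Bundle results) {
            // Transcriptions arrive here:
            // results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
        }
        @Override public void onPartialResults(Bundle partialResults) {}
        @Override public void onEvent(int eventType, Bundle params) {}
    }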

------------------------

On #273 itself:

We badly need OFFLINE recognition on a LIMITED SET of commands.

Currently the free-form recognition gives a lot of fanciful transcriptions, 
whereas it would be far more efficient if the recognizer found a match within 
the command set (like the home screen does).

Original comment by eric.am...@gmail.com on 18 Apr 2014 at 2:50

GoogleCodeExporter commented 9 years ago
And of course (sorry for forgetting that):

    mSpeechRecognizer.startListening(mSpeechIntent);

This actually starts the listening.

PS: why do we get these f*** captchas???

Original comment by eric.am...@gmail.com on 18 Apr 2014 at 2:53

GoogleCodeExporter commented 9 years ago
Thanks Eric! Do you think we need to embed this code inside a gesture detector 
(say, tap or double-tap), or does it listen for speech commands automatically 
at any point in time?
I am looking for something that would allow Glass to take speech input at any 
point during the application lifecycle.

Original comment by glasspan...@gmail.com on 18 Apr 2014 at 3:55

GoogleCodeExporter commented 9 years ago
You're welcome :)

What I do is start listening either on double-tap, or when the user raises her 
head (like the wake-up feature, but at a lower angle, around -15 deg).

Original comment by eric.am...@gmail.com on 18 Apr 2014 at 4:00
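
For reference, a double-tap trigger can be wired up with the stock 
android.view.GestureDetector; a minimal sketch (an assumption about the setup, 
not eric's actual code), reusing mSpeechRecognizer and mSpeechIntent from 
comment #28:

    import android.view.GestureDetector;
    import android.view.MotionEvent;

    private GestureDetector mGestureDetector;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // SimpleOnGestureListener also acts as the double-tap listener.
        mGestureDetector = new GestureDetector(this,
                new GestureDetector.SimpleOnGestureListener() {
                    @Override
                    public boolean onDoubleTap(MotionEvent e) {
                        mSpeechRecognizer.startListening(mSpeechIntent);
                        return true;
                    }
                });
    }

    // On Glass, touchpad events arrive here rather than onTouchEvent().
    @Override
    public boolean onGenericMotionEvent(MotionEvent event) {
        return mGestureDetector.onTouchEvent(event);
    }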

GoogleCodeExporter commented 9 years ago
That's awesome, Eric! I was actually looking for gestures similar to the head 
wake-up feature but I couldn't find one in the Glass developer reference. All I 
see are tap, double tap, fingers changed, etc. Where did you get the code for 
the head wake-up gesture?

Original comment by glasspan...@gmail.com on 18 Apr 2014 at 4:13

GoogleCodeExporter commented 9 years ago
@glasspan 
I used the accelerometer sensor. I don't have the code right now but I could 
supply it later.

I'd like the speech recognizer to start with onCreate. Is this possible on 
Glass now or do I still need a manual trigger?

Original comment by Han...@gmail.com on 18 Apr 2014 at 4:52
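
For anyone looking for that head-raise trigger, here is a rough sketch of one 
way to do it with the accelerometer (an illustration only, not Han's or eric's 
actual code; the -15 degree threshold comes from eric's comment above, and the 
axis mapping is a guess that should be calibrated on the device):

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    SensorManager sm = (SensorManager) getSystemService(SENSOR_SERVICE);
    sm.registerListener(new SensorEventListener() {
        private boolean mTriggered;

        @Override
        public void onSensorChanged(SensorEvent event) {
            float ax = event.values[0], ay = event.values[1], az = event.values[2];
            // Approximate head pitch from the gravity vector.
            double pitch = Math.toDegrees(Math.atan2(az, Math.sqrt(ax * ax + ay * ay)));
            if (pitch > -15 && !mTriggered) {
                mTriggered = true;
                mSpeechRecognizer.startListening(mSpeechIntent);
            } else if (pitch < -25) {
                mTriggered = false;  // re-arm with some hysteresis
            }
        }

        @Override public void onAccuracyChanged(Sensor sensor, int accuracy) {}
    }, sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER), SensorManager.SENSOR_DELAY_UI);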

GoogleCodeExporter commented 9 years ago
@glasspan look at the compass code example; it has everything you need.

@Han I create it in onCreate(), but I start it in onResume(); that way it 
stays on even after a sleep/resume cycle.

Original comment by eric.am...@gmail.com on 18 Apr 2014 at 5:38
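
A minimal sketch of that lifecycle split (using the fields from comment #28; 
the onPause/onDestroy teardown is an added assumption, not something eric 
describes, but it also helps avoid leaking the recognizer's service binding):

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mSpeechRecognizer = SpeechRecognizer.createSpeechRecognizer(this);
        mSpeechRecognizer.setRecognitionListener(new AbstractMainRecognitionListener());
    }

    @Override
    protected void onResume() {
        super.onResume();
        mSpeechRecognizer.startListening(mSpeechIntent);  // restart after sleep/resume
    }

    @Override
    protected void onPause() {
        mSpeechRecognizer.cancel();  // release the mic while backgrounded
        super.onPause();
    }

    @Override
    protected void onDestroy() {
        mSpeechRecognizer.destroy();  // release the service binding
        super.onDestroy();
    }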

GoogleCodeExporter commented 9 years ago
Hey,

Has anyone succeeded in implementing offline speech recognition, then?

Original comment by guilla...@touchsurgery.com on 20 Apr 2014 at 9:42

GoogleCodeExporter commented 9 years ago
I have successfully implemented the speech recognizer to work without a UI. My 
only problem is that whenever the application comes back from the background, 
the speech recognizer no longer works. If I try creating a new one, it says it 
cannot bind.

I am new to Android development (I usually do iOS) and I am confused about how 
to properly handle intents, recognizers, etc. when the application goes into 
the background. Is anyone else having this problem? I tried googling for 
solutions to no avail.

Thanks!

Original comment by tannerne...@gmail.com on 22 Apr 2014 at 4:11

GoogleCodeExporter commented 9 years ago
@tannerne is there a certain error message you're receiving? Does the 
application continue to run, or does it give an error code? 

I'm also occasionally running into RecogListen errors and I'm currently trying 
to debug that.

Original comment by Han...@gmail.com on 29 Apr 2014 at 5:34

GoogleCodeExporter commented 9 years ago
To implement offline speech recognition, I believe you have to root your Glass 
and follow the instructions for a typical Android device. I have speech running 
as a continuous service in my Glass app, but I keep getting Error = 5 (other 
client) when running online. I have an RTSP video feed displaying and lots of 
other network traffic, and I can't afford to wait for the Google translation 
service. I desperately need to have offline speech working within the week for 
a demo. One thing I have noticed is that when Glass heats up, everything stops 
working. Do I HAVE to root my Glass to make offline speech work? Please 
help....  Ben

Original comment by ben.ba...@ymail.com on 29 Apr 2014 at 6:10

GoogleCodeExporter commented 9 years ago
There's no need to root if you only need Google-Voice-style nearest-match 
recognition. Search for "voiceinputhelper voiceconfig stackoverflow". You need 
to use internal APIs, though.

It is absolutely necessary that this becomes part of the SDK.

Original comment by dkruzic on 1 May 2014 at 4:18
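
Roughly, the approach from that StackOverflow thread looks like the sketch 
below. Everything in it is internal and unofficial (com.google.glass.voice 
classes pulled from the Glass framework jar), so treat every name and signature 
as an unverified assumption that may break between XE releases (as XE17 later 
showed):

    // Constrain recognition to a fixed command list via internal APIs.
    String[] commands = new String[] { "next", "previous", "stop" };
    VoiceConfig voiceConfig = new VoiceConfig("MyVoiceConfig", commands);
    VoiceInputHelper voiceInputHelper = new VoiceInputHelper(this,
            new MyVoiceListener(voiceConfig),
            VoiceInputHelper.newUserActivityObserver(this));
    voiceInputHelper.addVoiceServiceListener();
    // MyVoiceListener implements the internal VoiceListener interface; its
    // onVoiceCommand(VoiceCommand) callback receives the matched phrase.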

GoogleCodeExporter commented 9 years ago
Thank you dkruzic.  That was exactly what I needed.  I had to strip out most of 
the jar class files so I could get it into a deployable state, but it works 
great.

Original comment by ben.ba...@ymail.com on 2 May 2014 at 6:47

GoogleCodeExporter commented 9 years ago
@ben.ba Could you please share your jar file? I've been struggling with this 
issue for a very long time. Thanks, Chris

Original comment by cesische...@gmail.com on 2 May 2014 at 8:55

GoogleCodeExporter commented 9 years ago
@eric.am...@gmail.com If one were to implement the code in comment #28 in 
SomeActivity.java onCreate(), how would one get access to the 
AbstractMainRecognitionListener callbacks like onResults() from inside of 
SomeActivity.java?

Thanks in advance.

Original comment by coleor...@gmail.com on 5 May 2014 at 2:39

GoogleCodeExporter commented 9 years ago
@coleor AbstractMainRecognitionListener is a class you define yourself, 
implementing RecognitionListener. 
Since you define the class and create the object, you have everything you need 
to call back into SomeActivity from the callbacks.

Original comment by eric.am...@gmail.com on 7 May 2014 at 11:39

GoogleCodeExporter commented 9 years ago
Hi everyone,

It looks like XE17 update has broken my code. Does anyone see the same?

E/AndroidRuntime(6321): FATAL EXCEPTION: main
05-07 15:06:52.454: E/AndroidRuntime(6321): Process: com.google.glass.voice, PID: 6321
05-07 15:06:52.454: E/AndroidRuntime(6321): java.lang.NullPointerException: VoiceEngine.startListening: voiceConfig cannot be null
05-07 15:06:52.454: E/AndroidRuntime(6321):     at com.google.glass.predicates.Assert.assertNotNull(Assert.java:68)
05-07 15:06:52.454: E/AndroidRuntime(6321):     at com.google.glass.voice.VoiceEngine.startListening(VoiceEngine.java:650)
05-07 15:06:52.454: E/AndroidRuntime(6321):     at com.google.glass.voice.VoiceService$VoiceServiceBinder.startListening(VoiceService.java:116)
05-07 15:06:52.454: E/AndroidRuntime(6321):     at com.google.glass.voice.GlassRecognitionService.attachCallback(GlassRecognitionService.java:272)
05-07 15:06:52.454: E/AndroidRuntime(6321):     at com.google.glass.voice.GlassRecognitionService.onStartListening(GlassRecognitionService.java:216)
...

Original comment by eric.am...@gmail.com on 7 May 2014 at 11:41

GoogleCodeExporter commented 9 years ago
[deleted comment]
GoogleCodeExporter commented 9 years ago
@eric.am

I'm getting the same errors :(

05-08 17:06:28.980: E/AndroidRuntime(5695): FATAL EXCEPTION: main
05-08 17:06:28.980: E/AndroidRuntime(5695): Process: com.google.glass.voice, PID: 5695
05-08 17:06:28.980: E/AndroidRuntime(5695): java.lang.NullPointerException: VoiceEngine.startListening: voiceConfig cannot be null
05-08 17:06:28.980: E/AndroidRuntime(5695):     at com.google.glass.predicates.Assert.assertNotNull(Assert.java:68)
05-08 17:06:28.980: E/AndroidRuntime(5695):     at com.google.glass.voice.VoiceEngine.startListening(VoiceEngine.java:650)
05-08 17:06:28.980: E/AndroidRuntime(5695):     at com.google.glass.voice.VoiceService$VoiceServiceBinder.startListening(VoiceService.java:116)
05-08 17:06:28.980: E/AndroidRuntime(5695):     at com.google.glass.voice.GlassRecognitionService.attachCallback(GlassRecognitionService.java:272)
05-08 17:06:28.980: E/AndroidRuntime(5695):     at com.google.glass.voice.GlassRecognitionService.onStartListening(GlassRecognitionService.java:216)
05-08 17:06:28.980: E/AndroidRuntime(5695):     at android.speech.RecognitionService.dispatchStartListening(RecognitionService.java:98)
05-08 17:06:28.980: E/AndroidRuntime(5695):     at android.speech.RecognitionService.access$000(RecognitionService.java:36)
05-08 17:06:28.980: E/AndroidRuntime(5695):     at android.speech.RecognitionService$1.handleMessage(RecognitionService.java:79)
05-08 17:06:28.980: E/AndroidRuntime(5695):     at android.os.Handler.dispatchMessage(Handler.java:102)
05-08 17:06:28.980: E/AndroidRuntime(5695):     at android.os.Looper.loop(Looper.java:149)
05-08 17:06:28.980: E/AndroidRuntime(5695):     at android.app.ActivityThread.main(ActivityThread.java:5061)
05-08 17:06:28.980: E/AndroidRuntime(5695):     at java.lang.reflect.Method.invokeNative(Native Method)
05-08 17:06:28.980: E/AndroidRuntime(5695):     at java.lang.reflect.Method.invoke(Method.java:515)
05-08 17:06:28.980: E/AndroidRuntime(5695):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:794)
05-08 17:06:28.980: E/AndroidRuntime(5695):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:610)
05-08 17:06:28.980: E/AndroidRuntime(5695):     at dalvik.system.NativeStart.main(Native Method)
05-08 17:06:28.988: I/Process(5695): Sending signal. PID: 5695 SIG: 9

Original comment by coleor...@gmail.com on 8 May 2014 at 10:09

GoogleCodeExporter commented 9 years ago
This is also happening for me! SpeechRecognizer has been broken.

Original comment by rantonio...@gmail.com on 8 May 2014 at 10:57

GoogleCodeExporter commented 9 years ago
:(

Hopefully we'll have an alternative documented in the XE17 release notes (once 
they're updated).

Original comment by coleor...@gmail.com on 8 May 2014 at 11:06

GoogleCodeExporter commented 9 years ago
I am not getting that exception from SpeechRecognizer after updating to XE17, 
but startListening is not working, even though it works fine on older Glass 
(XE16).

Original comment by eshankarprasad on 9 May 2014 at 6:08

GoogleCodeExporter commented 9 years ago
I've summarized all my findings in a stackoverflow thread here:
http://stackoverflow.com/questions/23558412/speechrecognizer-broken-after-google-glass-xe17-update-how-to-work-around

The callback is the only missing piece.

Please contribute so we can find a workaround together!

Original comment by eric.am...@gmail.com on 9 May 2014 at 7:21