Hey Glass Team,
why isn't this issue's priority set to high?
Original comment by eric.am...@gmail.com
on 9 May 2014 at 7:24
Agreeing with @eric.am... I believe this issue should have a "high" priority.
Original comment by coleor...@gmail.com
on 9 May 2014 at 4:32
I would like to add my name to that list for high priority. Surgeons can't
swipe while working; voice command navigation is a must!
Original comment by gzr...@gmail.com
on 9 May 2014 at 4:33
Me as well. Working as a biology research scientist, this removes the
hands-free nature of Glass. No pun intended.
Original comment by rantonio...@gmail.com
on 9 May 2014 at 4:42
One of the main advantages of Google Glass is the "hands-free" feature.
We can't use it if we don't have contextual voice commands.
It is making my app totally useless...
It should be a high priority.
Original comment by moshe.sc...@gmail.com
on 10 May 2014 at 5:29
I agree this should be high priority. I'm working on Google Glass applications
for field engineers. These workers need their hands for other tools and
typically have their hands dirty. Voice control is a must-have.
Original comment by cedric.f...@gmail.com
on 10 May 2014 at 5:41
@eric: can you post a working version of your background listener code on
GitHub?
Original comment by gzr...@gmail.com
on 11 May 2014 at 4:50
The Stack Overflow thread solution:
http://stackoverflow.com/questions/23558412/speechrecognizer-broken-after-google-glass-xe17-update-how-to-work-around
will only work if the possible voice commands are known beforehand.
There does not seem to be a fix/workaround for LANGUAGE_MODEL_FREE_FORM.
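For context, here is a minimal sketch of the usual LANGUAGE_MODEL_FREE_FORM setup that broke after XE17 and that the closed-set workaround cannot replace. The class, tag, and field names below are only illustrative, not from any of the linked code:

import java.util.ArrayList;

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import android.util.Log;

// Requires the RECORD_AUDIO permission in the manifest.
public class FreeFormRecognitionActivity extends Activity implements RecognitionListener {
    private static final String TAG = "FreeFormRecognition";
    private SpeechRecognizer recognizer;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        recognizer = SpeechRecognizer.createSpeechRecognizer(this);
        recognizer.setRecognitionListener(this);

        // Free-form dictation: no fixed command list is supplied.
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        recognizer.startListening(intent);
    }

    @Override
    public void onResults(Bundle results) {
        ArrayList<String> hypotheses =
                results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        Log.i(TAG, "Heard: " + hypotheses);
    }

    @Override
    public void onError(int error) {
        // On XE17 this tends to fire instead of onResults.
        Log.w(TAG, "Recognition error: " + error);
    }

    // Remaining RecognitionListener callbacks are no-ops in this sketch.
    @Override public void onReadyForSpeech(Bundle params) {}
    @Override public void onBeginningOfSpeech() {}
    @Override public void onRmsChanged(float rmsdB) {}
    @Override public void onBufferReceived(byte[] buffer) {}
    @Override public void onEndOfSpeech() {}
    @Override public void onPartialResults(Bundle partialResults) {}
    @Override public void onEvent(int eventType, Bundle params) {}

    @Override
    protected void onDestroy() {
        recognizer.destroy();
        super.onDestroy();
    }
}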
Original comment by chris.mo...@incentro.com
on 12 May 2014 at 3:55
I consider this functionality integral both for consistency with existing
interface conventions and for the value proposition of the device itself.
I agree that it warrants a high priority.
Original comment by diminish...@gmail.com
on 14 May 2014 at 7:03
Is Google even reading this? It seems that the new XE17 update is not even
adding previously voice-controlled apps to the "ok glass" menu, so this
device just became useless for demoing several apps not yet submitted.
IMHO that's a deal killer for continuing to demo this device, and it does not
make me feel that this issue will ever be tackled, given how Google handled
the Explorers by closing off voice command apps after making them available
in the GDK release. What is the point, really?
Original comment by bail...@gmail.com
on 14 May 2014 at 7:24
Hello,
Thanks for your interest in this feature request! We are working hard to make
such an API available to our developers, but this kind of work takes time to
perfect...
Please hang tight; we're listening to you and taking your feedback into account!
For comment #60, what you're looking for is the development permission that is
now required for custom voice triggers:
* Release notes: https://developers.google.com/glass/release-notes
* Documentation: https://developers.google.com/glass/develop/gdk/starting-glassware#unlisted_commands
Best,
Alain
Original comment by ala...@google.com
on 14 May 2014 at 10:25
Thanks Alain!
Original comment by rantonio...@gmail.com
on 14 May 2014 at 10:31
Dear Alain,
Do you have an idea when a fix will be released?
Thank you.
Original comment by guilla...@touchsurgery.com
on 15 May 2014 at 10:40
[deleted comment]
Hello Alain,
We thank you for your support. However, our project is blocked due to this
issue, and it would be very helpful if you could provide us with a timeline for
when it will be completed. I am sure that we are not the only ones waiting
for this feature.
Also, for the RecognitionListener, we are not getting the NullPointerException;
instead, 'onReadyForSpeech' is called and then 'onError' is called
immediately with error=6. We tried everything but the listener still fails.
Kindly suggest if it is the same issue.
Thank you in advance.
Original comment by nikhilpa...@gmail.com
on 20 May 2014 at 6:56
Hello Alain,
This update has WORKED!!! Thank you very much for your hard work!
Cheers,
Antonio
Original comment by rantonio...@gmail.com
on 20 May 2014 at 9:34
Hi Alain
First, thanks for providing some feedback on whether this thread was being
considered; I am glad it is.
You mention the new process for requesting voice commands to be submitted for
approval. I followed both links you mentioned and then followed the "submit a
new command for approval" link
(https://developers.google.com/glass/distribute/index#choosing_a_voice_command_clear-both),
but I cannot see any mention of voice or an approval process there and
instead land on a "Distribute" page. Googling does not bring up the page
either. I could spend more time looking around, but I thought it might
help everyone to provide the instructions for this page here.
Can you provide the correct URL?
Thanks for your help,
Yohan
Original comment by bail...@gmail.com
on 20 May 2014 at 10:04
You can request new voice commands right here:
https://developers.google.com/glass/distribute/voice-form
Original comment by ala...@google.com
on 20 May 2014 at 11:42
Hi Glass Dev Team,
I would like to give you some feedback about the VoiceEngine, which I used by
extracting and adding the GlassVoice.apk, as described in
http://stackoverflow.com/questions/23558412/speechrecognizer-broken-after-google-glass-xe17-update-how-to-work-around
First of all, CONGRATULATIONS.
It works super fast and recognizes the spoken command from the given set really
well. There are none of the delays you get with the free-form recognizer, and
it shows great robustness to various accents!
We are definitely impatient to see those methods available to our apps. IT
WORKS GREAT AS IS!!
One weakness I have spotted is with numbers (simple digits): if the list of
commands you give is just "1, 2, 3, ... 9", it has a lot of trouble recognizing
them.
I tried adding "Number X" in front, but it's even worse: saying "Number two"
yields "Number eight" as the result.
Please make the VoiceEngine available! With this set of imports, it's already
AWESOME!
import com.google.glass.input.VoiceInputHelper;
import com.google.glass.input.VoiceListener;
import com.google.glass.logging.FormattingLogger;
import com.google.glass.logging.FormattingLoggers;
import com.google.glass.voice.GlassSpeechRecognizer;
import com.google.glass.voice.VoiceCommand;
import com.google.glass.voice.VoiceConfig;
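For anyone looking for a starting point, here is a rough sketch of how these extracted classes appear to fit together, based on the Stack Overflow thread and @pscholl's example. They are undocumented, so the exact constructor and listener signatures may differ between XE releases, and the activity, command list, and field names below are only illustrative:

// Android imports, in addition to the com.google.glass imports listed above.
import android.app.Activity;
import android.os.Bundle;
import android.util.Log;

public class VoiceCommandActivity extends Activity {
    private static final String TAG = "VoiceCommandActivity";

    private VoiceInputHelper mVoiceInputHelper;
    private VoiceConfig mVoiceConfig;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Closed set of commands: recognition is restricted to these phrases.
        mVoiceConfig = new VoiceConfig("MyCommands",
                new String[] {"start", "stop", "next", "previous"});
        mVoiceInputHelper = new VoiceInputHelper(this, new MyVoiceListener(mVoiceConfig),
                VoiceInputHelper.newUserActivityObserver(this));
    }

    @Override
    protected void onResume() {
        super.onResume();
        mVoiceInputHelper.addVoiceServiceListener();
    }

    @Override
    protected void onPause() {
        super.onPause();
        mVoiceInputHelper.removeVoiceServiceListener();
    }

    class MyVoiceListener implements VoiceListener {
        private final VoiceConfig voiceConfig;

        MyVoiceListener(VoiceConfig voiceConfig) {
            this.voiceConfig = voiceConfig;
        }

        @Override
        public void onVoiceServiceConnected() {
            mVoiceInputHelper.setVoiceConfig(voiceConfig, false);
        }

        @Override
        public void onVoiceServiceDisconnected() {}

        @Override
        public VoiceConfig onVoiceCommand(VoiceCommand command) {
            Log.i(TAG, "Heard: " + command.getLiteral());
            return voiceConfig;  // keep listening with the same command set
        }

        @Override
        public FormattingLogger getLogger() {
            return FormattingLoggers.getContextLogger();
        }

        @Override public boolean isRunning() { return true; }
        @Override public boolean onResampledAudioData(byte[] data, int rate, int channels) { return false; }
        @Override public boolean onVoiceAmplitudeChanged(double amplitude) { return false; }
        @Override public void onVoiceConfigChanged(VoiceConfig config, boolean changed) {}
    }
}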
Have a great weekend!
Original comment by eric.am...@gmail.com
on 24 May 2014 at 2:37
Hi Eric
Could you please share a sample program so that I can try this method?
Dileep
Original comment by dileepmo...@gmail.com
on 24 May 2014 at 5:11
Hi Dileep, just look on the Stack Overflow page; @pscholl has posted a full
example on GitHub.
You can start an activity or not, both work.
Original comment by eric.am...@gmail.com
on 24 May 2014 at 5:57
Thank you very much Eric for responding. I saw @pscholl's thread but could not
find the GitHub link anywhere. I have been searching for a solution for two days :(
Dileep
Original comment by dileepmo...@gmail.com
on 24 May 2014 at 6:06
It's here (mentioned in a comment):
https://github.com/pscholl/glass_snippets/tree/master/esslib
Original comment by eric.am...@gmail.com
on 24 May 2014 at 6:08
Thank you very much Eric. You made my day
Original comment by dileepmo...@gmail.com
on 24 May 2014 at 6:13
You're welcome Dileep. @pscholl is the one to thank!
Original comment by eric.am...@gmail.com
on 24 May 2014 at 6:15
[deleted comment]
Hi Eric,
Did you manage to make it work with the digits (from 0 to 9)?
I have been looking for this for a couple of days already!
Thanks in advance.
Original comment by guilla...@touchsurgery.com
on 3 Jun 2014 at 2:42
I'm also very interested in
1) offline speech recognition
2) the ability to speak to choose from a pre-defined list of commands within an
immersion
Original comment by martynwinsen@gmail.com
on 4 Jun 2014 at 12:29
Hi Guillaume,
Unfortunately, the digits still don't work well on my side.
I'm waiting for an iteration on the Glass team's side:
when you dictate digits to Google's search engine, it's always
accurate.
With XE16, I felt it was working better too, by saying "Number X".
I'm pretty confident that they'll fix this soon, so I wait :)
@Martyn: interested we are!!
Original comment by eric.am...@gmail.com
on 4 Jun 2014 at 2:32
I have issues compiling @pscholl's solution with Eclipse.
I have to use Gradle, which I don't really care for.
I do hope that there will be an "official" solution, because this one, although
it works (thanks!), shouldn't be the end of this thread.
Glass team, we would love to hear about an official release date.
Original comment by moshe.sc...@gmail.com
on 5 Jun 2014 at 2:51
I really need an offline solution as well that works with multiple commands.
Original comment by hannings...@gmail.com
on 5 Jun 2014 at 10:08
This is now fixed for Immersions with XE18.1! Please read the full guide to
learn how it works:
https://developers.google.com/glass/develop/gdk/voice#contextual_voice_commands
The ApiDemo project has also been updated with a sample using this new feature:
https://github.com/googleglass/gdk-apidemo-sample
Make sure to update your GDK to the latest revision (currently 7).
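For reference, here is a minimal sketch of the pattern the guide describes; the layout, menu resource, and item id below (activity_main, voice_commands, show_dashboard) are placeholders, not names from the ApiDemo project:

import com.google.android.glass.view.WindowUtils;

import android.app.Activity;
import android.os.Bundle;
import android.view.Menu;
import android.view.MenuItem;

public class ContextualVoiceActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Ask the window to expose the "ok glass" contextual voice menu.
        getWindow().requestFeature(WindowUtils.FEATURE_VOICE_COMMANDS);
        setContentView(R.layout.activity_main);
    }

    @Override
    public boolean onCreatePanelMenu(int featureId, Menu menu) {
        if (featureId == WindowUtils.FEATURE_VOICE_COMMANDS) {
            // Each menu item title becomes a voice command.
            getMenuInflater().inflate(R.menu.voice_commands, menu);
            return true;
        }
        return super.onCreatePanelMenu(featureId, menu);
    }

    @Override
    public boolean onMenuItemSelected(int featureId, MenuItem item) {
        if (featureId == WindowUtils.FEATURE_VOICE_COMMANDS) {
            if (item.getItemId() == R.id.show_dashboard) {
                // Handle the spoken command here.
            }
            return true;
        }
        return super.onMenuItemSelected(featureId, item);
    }
}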
Original comment by ala...@google.com
on 11 Jun 2014 at 1:41
Awesome Alain!
I'll try it asap and provide you with feedback. Congrats to the team!!
Original comment by eric.am...@gmail.com
on 11 Jun 2014 at 1:44
Hi Alain,
Is there a way to use the commands *without* showing the menu?
A set of lower-level APIs similar to the VoiceEngine classes that would be
accessible programmatically: like the SpeechRecognizer, but with an array of
possible commands.
The use case is to keep control of what's on screen,
plus to get commands *without* having to say "ok glass" every time.
Thank you and have a great evening,
--Eric
Original comment by eric.am...@gmail.com
on 11 Jun 2014 at 1:55
Any way to use this on LiveCards (like Maps does)?
Original comment by keyboa...@gmail.com
on 11 Jun 2014 at 2:06
Hi Alain,
Thanks for the fix but I agree with Eric.
At the moment, we need to say "OK glass" every time, which will make the app
very clumsy to use with voice.
Furthermore, the fact that there is a menu that pops up every time makes it worse.
Also, I added the voice command to an image view, which caused the image itself
to be darkened and not visible.
Regards,
Moshe
Original comment by moshe.sc...@gmail.com
on 15 Jun 2014 at 1:58
I added a new issue for using contextual voice commands without "ok glass":
https://code.google.com/p/google-glass-api/issues/detail?id=544
Original comment by moshe.sc...@gmail.com
on 16 Jun 2014 at 8:24
Well done Moshe, I've voted for it.
Original comment by eric.am...@gmail.com
on 16 Jun 2014 at 11:08
I added this issue:
https://code.google.com/p/google-glass-api/issues/detail?id=553&thanks=553&ts=1403709099
Sometimes, after an activity with a voice menu calls another activity and that
activity is closed, the screen goes blank instead of presenting the original
activity.
I think it's a bug, but I'm not sure.
Original comment by moshe.sc...@gmail.com
on 25 Jun 2014 at 3:16
Original issue reported on code.google.com by
juan.tr...@intelygenz.com
on 29 Nov 2013 at 12:48