MixedRealityToolkit / MixedRealityToolkit-Unity


Add voice commands dynamically in Unity #319

Open · IssueSyncBot opened 1 year ago

IssueSyncBot commented 1 year ago

Original issue opened by:

@wiwei


Filing on behalf of another, reference #21759184

To use voice commands in the MRTK, you currently have to define the actions and keywords up front (i.e., in the profile/editor). These are then fed to the dictation/speech recognizers as a fixed array of words. The request here is to allow changing that set at runtime, so that dynamically created buttons can also be given voice commands.
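
For context, the underlying Unity speech API takes its phrase list as a constructor argument, which is why the MRTK profile has to enumerate every keyword before the recognizer starts. A minimal sketch of that up-front pattern, using Unity's `KeywordRecognizer` directly (the class name and keyword values below are illustrative, not MRTK code):

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

public class StaticKeywordExample : MonoBehaviour
{
    // Keywords must be known before the recognizer is constructed;
    // KeywordRecognizer has no API for adding phrases afterwards.
    private readonly string[] keywords = { "select", "open menu", "close" };
    private KeywordRecognizer recognizer;

    private void Start()
    {
        // The whole phrase set is baked in at construction time.
        recognizer = new KeywordRecognizer(keywords, ConfidenceLevel.Medium);
        recognizer.OnPhraseRecognized += args => Debug.Log($"Heard: {args.text}");
        recognizer.Start();
    }

    private void OnDestroy()
    {
        if (recognizer != null)
        {
            if (recognizer.IsRunning) recognizer.Stop();
            recognizer.Dispose();
        }
    }
}
```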


ISSUE MIGRATION

Issue migrated from: https://github.com/microsoft/MixedRealityToolkit-Unity/issues/5316

IssueSyncBot commented 1 year ago

Original comment by:

@wiwei


The tricky part about this one is the re-initialization of the speech providers needed to accomplish it: the set of keywords/voice commands is created at startup, when the keyword recognizer is newed up.
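
In other words, supporting runtime changes would mean tearing down and re-creating the recognizer whenever the keyword set changes. A rough sketch of that re-initialization, assuming a plain `KeywordRecognizer` rather than the MRTK speech provider (the helper class below is hypothetical, not part of MRTK):

```csharp
using System;
using System.Collections.Generic;
using UnityEngine.Windows.Speech;

// Hypothetical helper: rebuilds the recognizer whenever a new keyword is
// registered, since KeywordRecognizer's phrase set is fixed at construction.
public class RebuildableKeywordRecognizer
{
    private readonly List<string> keywords = new List<string>();
    private KeywordRecognizer recognizer;

    public event Action<PhraseRecognizedEventArgs> KeywordHeard;

    public void AddKeyword(string keyword)
    {
        if (keywords.Contains(keyword)) return;
        keywords.Add(keyword);
        Rebuild();
    }

    private void Rebuild()
    {
        // Tear down the old recognizer...
        if (recognizer != null)
        {
            if (recognizer.IsRunning) recognizer.Stop();
            recognizer.Dispose();
        }

        // ...and new one up with the expanded phrase set.
        recognizer = new KeywordRecognizer(keywords.ToArray());
        recognizer.OnPhraseRecognized += args => KeywordHeard?.Invoke(args);
        recognizer.Start();
    }
}
```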

IssueSyncBot commented 1 year ago

Original comment by:

@Mt-Perazim


Is there any update on this feature?

IssueSyncBot commented 1 year ago

Original comment by:

@polar-kev


This should be covered by #8310.