IssueSyncBot opened 1 year ago
wiwei
The tricky part about this one is the re-initialization of the speech providers needed to accomplish it: the set of keywords/voice commands is fixed at startup, when the keyword recognizer is newed up.
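For context, Unity's `KeywordRecognizer` (`UnityEngine.Windows.Speech`) takes its keyword array in the constructor and exposes no way to add words afterwards, so "changing" the set effectively means stopping, disposing, and re-creating the recognizer. Below is a minimal sketch of that re-initialization outside of MRTK's provider plumbing; the `RuntimeKeywordManager` class and its handler dictionary are illustrative, not existing MRTK API.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using UnityEngine;
using UnityEngine.Windows.Speech;

// Hypothetical helper (not MRTK API): maintains a runtime-mutable keyword set
// by disposing and re-creating Unity's KeywordRecognizer, whose keyword list
// is fixed at construction time. Windows-only, like the underlying recognizer.
public class RuntimeKeywordManager : MonoBehaviour
{
    private KeywordRecognizer recognizer;
    private readonly Dictionary<string, Action> handlers =
        new Dictionary<string, Action>();

    // Registers (or replaces) a keyword and rebuilds the recognizer so the
    // new word is actually listened for.
    public void AddKeyword(string keyword, Action onRecognized)
    {
        handlers[keyword] = onRecognized;
        RebuildRecognizer();
    }

    private void RebuildRecognizer()
    {
        if (recognizer != null)
        {
            recognizer.OnPhraseRecognized -= HandlePhraseRecognized;
            recognizer.Stop();
            recognizer.Dispose();
        }

        // The keyword array can only be supplied here, at construction.
        recognizer = new KeywordRecognizer(handlers.Keys.ToArray());
        recognizer.OnPhraseRecognized += HandlePhraseRecognized;
        recognizer.Start();
    }

    private void HandlePhraseRecognized(PhraseRecognizedEventArgs args)
    {
        if (handlers.TryGetValue(args.text, out Action handler))
        {
            handler();
        }
    }

    private void OnDestroy()
    {
        if (recognizer != null)
        {
            recognizer.Stop();
            recognizer.Dispose();
        }
    }
}
```

Note that the recognizer briefly stops listening while it is rebuilt, which is part of what makes doing this inside the speech providers tricky.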
Mt-Perazim
Is there any update on this feature?
polar-kev
This should be covered by #8310
Original issue opened by:
@wiwei
Filing on behalf of another, reference #21759184
To use voice commands in the MRTK, you have to define the actions and keywords up front (i.e. in the profile/editor). These are then fed to the dictation/speech recognizers as a fixed array of words. The request here is to be able to change that set at runtime, so that dynamically created buttons can also get voice commands (see the sketch below).
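As an illustration of the request, here is a hypothetical usage sketch building on the `RuntimeKeywordManager` above: a dynamically created button gets its own voice command right after instantiation. `DynamicButtonSpawner`, `buttonPrefab`, and `keywordManager` are all assumed names, not MRTK API.

```csharp
using UnityEngine;

// Hypothetical usage of the RuntimeKeywordManager sketched earlier:
// spawn a button at runtime and give it a voice command on the spot.
public class DynamicButtonSpawner : MonoBehaviour
{
    [SerializeField] private GameObject buttonPrefab;          // assumed prefab reference
    [SerializeField] private RuntimeKeywordManager keywordManager;

    public void SpawnButton(string voiceCommand)
    {
        GameObject button = Instantiate(buttonPrefab, transform);
        // Rebuilds the underlying KeywordRecognizer with the expanded keyword set.
        keywordManager.AddKeyword(voiceCommand, () => button.SetActive(false));
    }
}
```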
ISSUE MIGRATION
Issue migrated from: https://github.com/microsoft/MixedRealityToolkit-Unity/issues/5316