w3c / naur

https://w3c.github.io/naur/

Scope question - KP #30

Open · joshueoconnor opened 2 years ago

joshueoconnor commented 2 years ago

[Filed on behalf of @kimpatch]

Regarding the scope text:

"An example would be an interactive navigation tool that allows the user to issue spoken commands and to interact with a graphical map with a pointing device"

Some context that might be worth some thought: we've been talking for many, many years about this multimodal interface example that lets the user say "I want to go from here to here" on a map while pointing. I haven't ever actually seen it implemented.

But for many years I, and others, have been using a Wacom tablet pointing device at the same time as speech to do many things. I think of this as unaware multimodal: the user is using two modes of input at once, but the two modes don't have to be aware of each other. This kind of mixed input doesn't tend to be mentioned, yet it is a practical use that's happening now. So every time I see something about this mythical interface, I wonder why today's practical mixed input is so invisible.

jasonjgw commented 2 years ago

The purpose of the example is simply to illustrate the scope of the issues addressed in the document. That is, such a multimodal application is within scope.

Whether it's widely available or not seems to me irrelevant to the scope question.

I suggest we leave it as is, or substitute a different example if anyone has a better candidate in mind.

joshueoconnor commented 2 years ago

+1 to Jason. However, there is a chicken-and-egg interaction model here (so to speak): some users will only know that this mixed mode of interaction is possible once someone else introduces them to it. I guess we are indicating the potential in our scope.

jasonjgw commented 2 years ago

I propose we close this without changing the document.