Raised by @jasonjgw : Should gestures and graphical interactions that are closely connected with the natural language aspect be addressed? For example, if a user points to a location on a map and says "here" to make a request to a smart agent, the pointing gesture and the utterance are both central to the interaction. It seems artificial to regard the utterance processing as in scope but the gesture as outside the scope of this discussion.
Similarly, visual, auditory, and haptic cues and alerts are not strictly part of the natural language aspect, but they are closely connected with it. Should natural language interfaces in context be discussed in a separate section?