matatk opened this issue 4 years ago
This is excellent stuff for discussion/addition in a next draft about some of the subtleties of translating affordances from one modality to another!
This was discussed by the Research Questions Task Force: https://www.w3.org/2020/06/10-rqtf-minutes.html#item02
Needs further discussion. The idea of adding examples was well received, but the second part also raised specific issues: how to support multiple modalities, how they will interact, how to ensure that modality abstractions stay in sync, and what happens when an interaction in one modality is translated and bubbles into another. Great stuff @matatk
@matatk Discussed in RQTF again. Action on Josh to draft a new requirement to ensure that actions undertaken in modality abstractions are kept in sync.
@matatk I've added a new bullet here: "How can we ensure that what happens in one modality is updated in another, so the various abstractions do not get out of sync? E.g. synchronization of captions between real-time text transcriptions and other alternatives such as symbols or AAC."
I think section 3.9 makes some great points. I wonder if this is an appropriate place for some examples.
Do I understand correctly that the first point is about things that in the real world may be inherently accessible affordances, like tactile paving, or the presence of a handle or a plate on a door, but we are trying to figure out how to expose those in a natural way to the XR user?
The last point is important and evokes two different situations for me. Are these the ones you are describing? If not, do you think either warrants mentioning?
Situations in which an underlying interaction is presented in a certain way because it works for most people, and assistive technology is then layered on top of that interaction in its rendered modality, even though it would have been simpler for the user to have had a different representation.
A classic example of this is drag-and-drop, which is frequently represented visually on web pages. It is possible to use ARIA etc. to convey draggable items and drop zones (making the problem 2D in some cases), but in many cases it may have been simpler to present the problem as specifying the order of items in a list, which can be done as a simple 1D problem with numbers or up/down keystrokes/buttons as inputs.
This is a problem of adapting a particular rendering of the problem rather than the underlying problem.
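To make the 1D framing concrete, here is a minimal sketch (hypothetical names, not from any draft or spec) of reordering as a one-dimensional operation: an "up"/"down" command, which could come from arrow keys, buttons, or voice input, moves an item by one place in the list.

```typescript
// Hypothetical sketch: list reordering as a 1D operation, an alternative
// to a visually rendered 2D drag-and-drop interaction.
type Direction = "up" | "down";

function moveItem<T>(items: T[], index: number, dir: Direction): T[] {
  const target = dir === "up" ? index - 1 : index + 1;
  // Ignore moves that would fall off either end of the list.
  if (target < 0 || target >= items.length) return items.slice();
  const result = items.slice();
  // Swap the item with its neighbour in the chosen direction.
  [result[index], result[target]] = [result[target], result[index]];
  return result;
}
```

The point of the sketch is that the same underlying intent ("put this item earlier/later") needs no spatial reasoning at all, so it can be driven equally well by keyboard, switch, or speech input.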
Situations in which an adaptation has been carried out to transform one modality into another, to make it accessible, but we need a way to convey the user's interaction in the adapted modality back into the original one.
E.g. reading a book in XR may be presented visually, with page-turning gestures required, but if a screen reader is being used to read the text, keyboard interactions may be more suitable for the user.
This is a problem of bridging between AT and content, and may imply the need for an accessibility layer for XR.
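One way such a layer could work (a sketch under my own assumptions, not a proposed API) is to have each modality's inputs resolve to a shared semantic intent, so the content sees "turn the page" whether it arrived as an XR gesture or a keystroke, and all rendered modalities re-sync from the same state:

```typescript
// Hypothetical sketch of an accessibility layer: inputs from different
// modalities map onto the same semantic intent, keeping abstractions in sync.
type Intent = "nextPage" | "previousPage";

const inputToIntent: Record<string, Intent> = {
  "gesture:swipe-left": "nextPage",    // visual XR modality
  "key:ArrowRight": "nextPage",        // keyboard / screen-reader modality
  "gesture:swipe-right": "previousPage",
  "key:ArrowLeft": "previousPage",
};

class Book {
  page = 1;
  constructor(private lastPage: number) {}

  // Any modality delivers its input here; the book only ever acts on intents.
  handleInput(input: string): number {
    const intent = inputToIntent[input];
    if (intent === "nextPage" && this.page < this.lastPage) this.page++;
    if (intent === "previousPage" && this.page > 1) this.page--;
    // Every modality (visual scene, screen reader, captions) re-renders
    // from this single shared state, so they cannot drift apart.
    return this.page;
  }
}
```

Because the translation happens at the intent level rather than the rendering level, an interaction performed in the adapted modality is automatically reflected back into the original one, which is the bridging problem described above.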
This needs to have the XAUR label, but I don't have permissions to make that change on this repo.
Original feedback email, for reference