immersive-web / proposals

Initial proposals for future Immersive Web work (see README)

Define and query properties of "things" #54

Open LJWatson opened 4 years ago

LJWatson commented 4 years ago

One of the challenges yet to be solved for WebXR content is how to make it accessible to someone who is unable to see it because they are blind or have very low vision. This might be described as the following user stories:

I'm blind and I want to know what "things" are in a WebXR space.

I'm blind and I want to know the following things about a "thing" in WebXR:

There are undoubtedly many more properties for an XR thing that would be useful, but hopefully this gives you the idea.

At present we have a partial solution in the form of ARIA, and possibly a bit more in the form of the AOM, but neither is close to a complete solution.

ARIA is a set of attributes designed to polyfill missing accessibility semantics in markup languages. If the WebXR space contains standard website/webapp UI components (form fields, tabpanels, tables etc.) then ARIA can be used to convey the right semantic information.
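To make this concrete, here is an illustrative sketch (not part of any WebXR spec) of the ARIA attributes a standard 2D UI panel embedded in an XR scene might carry; the element id `settings-tab` is hypothetical:

```javascript
// The attributes a conventional tabpanel would carry. A screen reader can
// interpret these because "tabpanel" is an existing ARIA role; the problem
// described above starts when the "thing" has no such role (a dragon, a
// tree, an imaginary object).
const tabPanelAttrs = {
  role: "tabpanel",                  // standard ARIA role, maps cleanly
  "aria-labelledby": "settings-tab", // ties the panel to its controlling tab
  "aria-hidden": "false",
};

// A spoken-style rendering of those semantics.
function describe(attrs) {
  return `${attrs.role} labelled by ${attrs["aria-labelledby"]}`;
}

console.log(describe(tabPanelAttrs)); // → "tabpanel labelled by settings-tab"
```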

The trouble is that ARIA doesn't currently have the vocabulary to handle whatever might be contained within a WebXR space. It would also be impossible to create ARIA variants for everything and anything that might appear in WebXR, including things from pure imagination.

The AOM is a standard in early incubation, but one of its goals is to enable developers to create virtual nodes and branches in the browser's accessibility tree. One of the use cases it cites for this is the creation of virtual fallback content for things like the canvas element, which has obvious potential for WebXR.
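Since the AOM's virtual-node API is still in incubation, a sketch of the idea using plain objects (all names here are assumptions, not the AOM API) might look like this:

```javascript
// A virtual accessibility node: role, label, and child nodes, none of
// which need to correspond to real DOM elements.
function makeVirtualNode(role, label, children = []) {
  return { role, label, children };
}

// Virtual fallback tree for a canvas-rendered XR scene: the canvas has no
// DOM children, but this tree could still be exposed to assistive tech.
const canvasFallback = makeVirtualNode("group", "XR scene", [
  makeVirtualNode("button", "Enter VR"),
  makeVirtualNode("img", "Rotating globe"),
]);

// Flatten the tree into the labels an accessibility tree would expose.
function labels(node) {
  return [node.label, ...node.children.flatMap((c) => labels(c))];
}

console.log(labels(canvasFallback)); // → ["XR scene", "Enter VR", "Rotating globe"]
```

Note that the roles here are still ARIA roles, which is exactly the limitation described in the next paragraph.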

Because the AOM builds on ARIA, though, it will have more flexibility than ARIA applied in the DOM but will likely share ARIA's limitations.

So I think we need something new, and I'm posting this suggestion here for comment:

Aadjou commented 4 years ago

Thank you for sharing this and I hope that a discussion will follow.

My initial thought is whether a potential accessibility context is always per object/thing in the scene graph, or whether there might be some broader context (e.g. spatial coordinates or relationships between things) in WebXR.

LJWatson commented 4 years ago

@Aadjou said:

> My initial thought is whether a potential accessibility context is always per object/thing in the scene graph, or whether there might be some broader context (e.g. spatial coordinates or relationships between things) in WebXR.

Spatial coordinates seem like they would be a useful property.

Could you give me an example of what you mean by "relationships" though? I think this may be the thing I called "hierarchical context", where you'd be able to discover something like whether a thing was part of a collection (siblings), owned/contained by another thing (parent/child relationship) etc. but I'm not sure!
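The "hierarchical context" described above could be sketched as queries over a scene graph. This is purely illustrative (the structure and names are assumptions, not a proposed API):

```javascript
// A toy scene graph: a forest containing two trees, one with a bird in it.
const scene = {
  id: "forest",
  children: [
    { id: "tree-1", children: [] },
    { id: "tree-2", children: [{ id: "bird", children: [] }] },
  ],
};

// Find the parent of the thing with the given id (the root has no parent).
function findParent(root, id, parent = null) {
  if (root.id === id) return parent;
  for (const child of root.children) {
    const found = findParent(child, id, root);
    if (found !== null) return found;
  }
  return null;
}

// Siblings: the other children of the thing's parent.
function siblings(root, id) {
  const parent = findParent(root, id);
  if (!parent) return [];
  return parent.children.filter((c) => c.id !== id).map((c) => c.id);
}

console.log(siblings(scene, "tree-1"));      // → ["tree-2"]
console.log(findParent(scene, "bird").id);   // → "tree-2"
```

Queries like these would let a blind user discover that the bird is in a tree, and that the tree stands among others.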

rdub80 commented 4 years ago

I would suggest capturing the "thing's" volume and physicality in its spatial context.
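One concrete reading of "volume in its spatial context" is an axis-aligned bounding box derived from a thing's vertices, which could be exposed alongside its label. A minimal sketch, with hypothetical names:

```javascript
// Compute the axis-aligned bounding box of a set of [x, y, z] vertices.
function boundingBox(vertices) {
  const min = [Infinity, Infinity, Infinity];
  const max = [-Infinity, -Infinity, -Infinity];
  for (const v of vertices) {
    for (let axis = 0; axis < 3; axis++) {
      min[axis] = Math.min(min[axis], v[axis]);
      max[axis] = Math.max(max[axis], v[axis]);
    }
  }
  // size: extent along each axis, e.g. "1 m wide, 2 m tall, 0.5 m deep"
  return { min, max, size: max.map((m, axis) => m - min[axis]) };
}

const box = boundingBox([[0, 0, 0], [1, 2, 0.5]]);
console.log(box.size); // → [1, 2, 0.5]
```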

LJWatson commented 4 years ago

@rdub80 can you give some examples of what you mean by "physicality"?

LJWatson commented 4 years ago

Another property (or set of properties) would be the thing's state: whether it is on/off, on fire, hovering, or even more literal states like solid, liquid, gas etc.
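Those state properties could be sketched as a plain descriptor that a screen reader summarises; the property names here are hypothetical:

```javascript
// A "thing" with the kinds of state suggested above.
const torch = {
  label: "torch",
  state: {
    powered: "on",   // on/off
    burning: true,   // on fire
    hovering: false,
    matter: "solid", // solid / liquid / gas
  },
};

// Render the state as a spoken-style summary.
function announce(thing) {
  const s = thing.state;
  return `${thing.label}: ${s.powered}, ${s.burning ? "on fire" : "not burning"}, ${s.matter}`;
}

console.log(announce(torch)); // → "torch: on, on fire, solid"
```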

AdamSobieski commented 3 years ago

This could be relevant: https://cs.stanford.edu/people/ranjaykrishna/sgrl/index.html