AdaRoseCannon opened this issue 8 months ago
Mentioned in an editors meeting: There's a possibility that this information could also be used as a generic input assist, where we could start surfacing, on select events, which semantic object the target ray intersected. This could make some types of input easier for developers. (A rough illustration follows.)
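As a hedged sketch only: the names below (`semanticTarget`, its `label` field) are invented for illustration and are not part of any current WebXR event, but they show the kind of thing a select event could surface if the UA already knew which semantic object the target ray hit.

```ts
declare const session: EventTarget; // an active XRSession

session.addEventListener("select", (event) => {
  // "semanticTarget" is a hypothetical field naming the semantic object
  // the target ray intersected at select time.
  const hit = (event as { semanticTarget?: { label: string } }).semanticTarget;
  if (hit) {
    console.log(`Select landed on "${hit.label}"`);
  }
});
```

With something like this, apps could react to "the user selected the chair" without running their own ray-casting against scene geometry.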
Right now, rendered scenes are pretty opaque: they are hard for machines to parse in order to extract information about what is being shown and where it sits in 3D space.
I would like to propose a solution where the user creates an object graph and attaches it to an entry point on the session, with each object assigned a colour, plus a stencil buffer into which those colours are rendered so that the device knows what is in the scene.
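To make that concrete, here is a rough sketch of what the author-facing side might look like. Everything here is hypothetical illustration, not proposed API surface: `XRSemanticObject`, the `semanticObjects` attachment point, the `colourId` field, and the `drawObject` callback are all invented names; the stencil pass just shows one way the per-object colours could be written so the device can map pixels back to objects.

```ts
// Hypothetical shape of the semantic object graph the author attaches to the session.
interface XRSemanticObject {
  label: string;                 // e.g. "table", "door", "menu-panel"
  colourId: number;              // colour/stencil value the object is assigned
  children?: XRSemanticObject[]; // nested semantic objects
}

// The author builds the graph and hands it to a (hypothetical) entry point on the session.
const graph: XRSemanticObject[] = [
  { label: "table", colourId: 1, children: [{ label: "cup", colourId: 2 }] },
  { label: "door", colourId: 3 },
];
// (session as any).semanticObjects = graph; // hypothetical attachment point

// During rendering, the app writes each object's colourId into a stencil-style buffer,
// so any pixel can be mapped back to the semantic object that produced it.
function renderSemanticIds(
  gl: WebGL2RenderingContext,
  drawObject: (id: number) => void, // app-supplied draw call for one object
) {
  gl.enable(gl.STENCIL_TEST);
  for (const obj of flatten(graph)) {
    // Always pass the stencil test and replace the stored value with this object's colourId.
    gl.stencilFunc(gl.ALWAYS, obj.colourId, 0xff);
    gl.stencilOp(gl.KEEP, gl.KEEP, gl.REPLACE);
    drawObject(obj.colourId);
  }
}

function flatten(objs: XRSemanticObject[]): XRSemanticObject[] {
  return objs.flatMap((o) => [o, ...flatten(o.children ?? [])]);
}
```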
/facetoface