Shared-Reality-Lab / IMAGE-browser

IMAGE project browser extensions & client-side code

Investigate the use of textures #103

Open sriGanna opened 2 years ago

sriGanna commented 2 years ago

This would be for the active exploration (passive guidance) mode.

There are two parts:

1. Trying different textures to determine whether we can render multiple distinguishable textures
2. Investigating how we communicate the information that each texture represents

To close this issue we either:

1. conclude that we can't render multiple distinguishable textures, OR
2. provide a list of at least two distinguishable textures (see the sketch after this list for what candidates might look like)
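To make part 1 concrete, here is a minimal sketch of what two candidate textures could look like as force functions for a 2D haptic end effector. Everything here is hypothetical: the types, function names, and parameter values are illustrative, not the actual 2DIY/Haply API.

```typescript
// Hypothetical sketch: two candidate haptic textures as force functions.
// Not the 2DIY API; types and parameters are made up for illustration.

interface Vec2 { x: number; y: number; }

// Texture 1: a fine sinusoidal grating. Ridges perpendicular to x give a
// lateral force that alternates with position, felt as a regular "bumpy"
// surface under the end effector.
function gratingForce(pos: Vec2, spacingM = 0.002, amplitudeN = 0.5): Vec2 {
  return { x: amplitudeN * Math.sin((2 * Math.PI * pos.x) / spacingM), y: 0 };
}

// Texture 2: viscous damping. The force opposes velocity, felt as a
// "sticky" or "thick" region, qualitatively unlike the grating's bumps.
function dampingForce(vel: Vec2, dampingNsPerM = 2.0): Vec2 {
  return { x: -dampingNsPerM * vel.x, y: -dampingNsPerM * vel.y };
}

// Per frame: pick a texture based on which image region the effector is over.
function renderForce(pos: Vec2, vel: Vec2, inBumpyRegion: boolean): Vec2 {
  return inBumpyRegion ? gratingForce(pos) : dampingForce(vel);
}
```

One reason to pair a position-based grating with a velocity-based damping field is that they engage different sensations (bumpiness vs. viscosity), which may make them easier to tell apart than, say, two gratings that differ only in spacing.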

jeffbl commented 2 years ago

I'm putting this in backlog, but if the planned CSUN renderings will include this functionality, please move it to a closer milestone (e.g., CSUN feature freeze)

rayanisran commented 2 years ago

The 2DIY guidance control (on the web?) seems tricky to fine-tune even for the initial experience we created, let alone for any future, more sophisticated setup procedures that could become a hassle for blind users (such as initial calibration). Since the 2DIY is far less likely to break in exploration mode, I think post-March 31 might be a good time to look into this.

We could revisit the physics-engine idea, where multiple distinct textures are generated by effects such as damping or force-feedback walls (those who took or tested the CanHaps course labs may recognize what I'm describing; a rough sketch follows below). I'm unsure how compelling such an experience would be for a blind user, especially a congenitally blind individual, but could we create a "simple" experience that helps provide some sense of spatial awareness of objects or regions in a photo?
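For concreteness, here is a rough sketch of the force-feedback wall idea as a spring penalty force around a rectangular image region, so an object's outline is felt as a stiff boundary. Again, all names, types, and numbers are hypothetical assumptions for illustration, not the actual 2DIY API.

```typescript
// Hypothetical sketch: a force-feedback "wall" around an object region,
// rendered as a spring penalty force. Not the 2DIY API.

interface Vec2 { x: number; y: number; }
interface Rect { xMin: number; xMax: number; yMin: number; yMax: number; }

// When the end effector penetrates the rectangle, push it back out through
// the nearest edge with a force proportional to penetration depth.
function wallForce(pos: Vec2, region: Rect, stiffnessNPerM = 400): Vec2 {
  const inside =
    pos.x > region.xMin && pos.x < region.xMax &&
    pos.y > region.yMin && pos.y < region.yMax;
  if (!inside) return { x: 0, y: 0 };

  // Penetration depth relative to each edge; exit through the shallowest one.
  const dxMin = pos.x - region.xMin; // entered through the left edge
  const dxMax = region.xMax - pos.x; // entered through the right edge
  const dyMin = pos.y - region.yMin; // entered through the bottom edge
  const dyMax = region.yMax - pos.y; // entered through the top edge
  const shallowest = Math.min(dxMin, dxMax, dyMin, dyMax);

  if (shallowest === dxMin) return { x: -stiffnessNPerM * dxMin, y: 0 };
  if (shallowest === dxMax) return { x: stiffnessNPerM * dxMax, y: 0 };
  if (shallowest === dyMin) return { x: 0, y: -stiffnessNPerM * dyMin };
  return { x: 0, y: stiffnessNPerM * dyMax };
}
```

Combined with a damping texture inside the region, something like this might let a user trace an object's boundary and feel its interior as distinct, which is roughly the spatial-awareness experience described above.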