Three-dimensional printed models have the potential to serve as powerful accessibility tools for blind people. Recently, researchers have developed methods to further enhance 3D prints by making them interactive: when a user touches a certain area of the model, the model speaks a description of that area. However, these interactive 3D printed models (I3Ms) have been limited in their functionality and interaction techniques. We conducted a two-section study with 12 legally blind participants to fill the gap between existing interactive model technologies and end users' needs, and to explore design opportunities. In the first section of the study, we observed participants' behavior as they explored and identified models and their components. In the second section, we elicited user-defined input techniques that would trigger various functions of an interactive model. We identified five exploration activities (e.g., comparing tactile elements), four hand postures (e.g., using one hand to hold a model in the air), and eight gestures (e.g., using the index finger to strike a model) from the participants' exploration processes, and aggregated their elicited input techniques. We derived key insights from our findings, including (1) design implications for I3M technologies, and (2) specific designs for interactions and functionalities for I3Ms.
This paper aims to understand blind people’s needs and preferences for interactivity in 3D printed models by investigating two research questions:
RQ1: How do blind people explore tactile models (that are not interactive)?
RQ2: What interaction techniques are effective in I3Ms?
Study
Goal
Draw design implications from
(1) blind people’s exploration behaviors
(2) user-defined input techniques.
Participants
12 legally blind participants (8 female)
Ages 23 to 60 (mean = 40.75, SD = 13.15)
Low vision: 1; college graduates: 8; Braille readers: 11 (4 did not read Braille regularly); iPhone users: 11
Procedure
One session (60 min): two sections
Exploration: performing one task for each model
Task 1. Identify the Model.
Task 2. Describe the Shape of an Element.
Task 3. Describe the Shapes of Nearby Elements.
Requirements:
Think aloud and explain what they were feeling and doing.
Elicitation: researchers prompt a participant with the effect of an action (the Referent, i.e., a function), and the participant proposes the cause of that action (the Sign, i.e., an input technique).
Referent 1. Get General Model Information.
Referent 2. Select an Element and Get its Name.
Referent 3. Select a Sub-Area of an Element and Get its Name.
Referent 4. Get More Information.
Referent 5. Record Note.
Referent 6. Retrieve Notes.
Analysis
Exploration section
Code the video clips using digital note cards (1. a static frame, 2. an identification code, 3. text explaining the frame, 4. the related dialog)
Cluster the note cards into groups based on hand postures and gestures
Identify the theme of each group
Elicitation section
Sort the suggested input techniques
Calculate the Max-Consensus and Consensus-Distinct Ratio for each Referent (see the sketch after this list)
Develop themes from the transcription using axial coding
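Both agreement metrics are standard in elicitation studies: Max-Consensus is the share of participants who proposed the most popular Sign for a Referent, and the Consensus-Distinct Ratio is the share of distinct Signs for a Referent that reached consensus (i.e., were proposed by more than one participant). A minimal Python sketch, assuming one proposal per participant and a consensus threshold of two; the example proposals are hypothetical, not the study's data:

```python
from collections import Counter

def max_consensus(proposals):
    """Share of participants who proposed the most popular sign."""
    counts = Counter(proposals)
    return max(counts.values()) / len(proposals)

def consensus_distinct_ratio(proposals, threshold=2):
    """Share of distinct signs proposed by at least `threshold` participants."""
    counts = Counter(proposals)
    agreed = sum(1 for c in counts.values() if c >= threshold)
    return agreed / len(counts)

# Hypothetical proposals for one referent, one per participant
proposals = ["tap", "tap", "tap", "double-tap", "double-tap", "press", "speech"]
print(max_consensus(proposals))             # 3/7 ≈ 0.43
print(consensus_distinct_ratio(proposals))  # 2 of 4 distinct signs → 0.50
```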
Tactile information needs to be made clear in 3D models
Design different models that contain different levels of detail
Controllable and Changeable Digital Content
One potential issue with I3Ms is continuously producing unnecessary audio output while a user explores the model. Users need to be in control of the information they are accessing
Modes can be designed to change content and avoid overwhelming the user with information, e.g. (a sketch follows this list):
On/off audio output
Different levels of information: nation/state/city mode
Give/guess information
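A minimal sketch of how such a mode layer might be structured; the mode names mirror the bullets above, but the class and its API are illustrative assumptions, not a design from the paper:

```python
from dataclasses import dataclass

DETAIL_LEVELS = ["nation", "state", "city"]  # example detail levels from above

@dataclass
class I3MModes:
    audio_on: bool = True
    detail: str = "nation"
    quiz_mode: bool = False  # "guess" information instead of being given it

    def toggle_audio(self):
        self.audio_on = not self.audio_on

    def cycle_detail(self):
        i = DETAIL_LEVELS.index(self.detail)
        self.detail = DETAIL_LEVELS[(i + 1) % len(DETAIL_LEVELS)]

    def describe(self, element):
        if not self.audio_on:
            return None  # stay silent while the user explores freely
        if self.quiz_mode:
            return "What is this region?"  # prompt the user to guess first
        return element[self.detail]

modes = I3MModes()
element = {"nation": "USA", "state": "New York", "city": "Ithaca"}
print(modes.describe(element))  # "USA"
modes.cycle_detail()
print(modes.describe(element))  # "New York"
```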
Supporting Exploration Behaviors
Tabletop exploration: A stable base
Midair exploration: Grabbing posture and a handle
Learnable and Distinguishable Gestures
Pointing, Striking, and Following gestures are intuitive ways of inquiring about information in everyday communication
I3Ms should be able to distinguish deliberate gestures from exploration behaviors to avoid confusion
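One plausible disambiguation heuristic (an assumption, not the paper's method): deliberate strikes tend to be brief and nearly stationary, while exploration involves long, moving contact. The thresholds are illustrative and would need tuning with blind users:

```python
MAX_STRIKE_DURATION = 0.25  # seconds of contact
MAX_STRIKE_TRAVEL = 3.0     # millimeters of finger movement during contact

def classify_contact(duration_s, travel_mm):
    """Classify a touch as a deliberate gesture or exploratory contact."""
    if duration_s <= MAX_STRIKE_DURATION and travel_mm <= MAX_STRIKE_TRAVEL:
        return "strike"       # treat as a deliberate input gesture
    return "exploration"      # ignore, so exploration stays uninterrupted

print(classify_contact(0.1, 1.0))   # strike
print(classify_contact(1.8, 40.0))  # exploration
```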
Interaction Techniques for Small Models
Combine different interaction techniques to overcome size constraints, e.g., speech input for further information (see the sketch below)
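A sketch of one such combination: the finger anchors the target element on the small model, and a spoken query specifies what information is wanted. The element data and keyword matching are hypothetical, and any speech-to-text service could supply the transcribed query:

```python
def handle_query(touched_element, spoken_text):
    """Answer a spoken query about the element currently under the finger."""
    q = spoken_text.lower()
    if "name" in q:
        return touched_element["name"]
    if "more" in q or "detail" in q:
        return touched_element["description"]
    return "Try asking for the name or for more details."

element = {"name": "Lake Ontario",
           "description": "A lake on the border between the US and Canada."}
print(handle_query(element, "What is the name of this?"))  # "Lake Ontario"
```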
Links
A study examining what input techniques for visually impaired people are possible using three-dimensional touch devices, which the spread of 3D printers has made feasible to fabricate.