As a deaf user, I want to see patterns that display the spoken vowel, so that I can better connect with auditory stimuli and media.
Background
The AudioLux device was developed by CymaSpace, a Portland-based nonprofit that is owned by, and makes products for, the deaf and hard-of-hearing community. The device serves as an “audio visualizer”: it turns sound into visual signals driven by an addressable LED strip.
The AudioLux has been worked on by two previous capstone teams. So far, the device can display pre-programmed patterns based on microphone or 3.5mm input. It also has a web app for precisely controlling the device over Wi-Fi.
Why do we want it?
To further the AudioLux device’s stated goal: to allow deaf and hard-of-hearing individuals to more easily connect with auditory stimuli.
To align with the goals and interests of our stakeholders, namely the project partner.
Who is this for?
Deaf and hard of hearing individuals
Event Technicians
Do we have data to support it?
Vowel detection works by matching the formants of the incoming voice against known vowel formants. Our legacy codebase already includes code to detect formants.
Requirements
Functional
The audio analyzer software must detect spoken vowels for both male and female speakers.
The audio analyzer software must correctly identify vowels at least 70% of the time.
The pattern displayed using the vowel information must directly convey what vowel is being spoken.
The pattern displayed must avoid rapid changes in light to prevent overstimulation, a known concern in the deaf community.
Nonfunctional
The back-end algorithm must support both two- and three-dimensional formant inputs.
The back-end algorithm will be implemented using a nearest-neighbor classifier.
The written code must be easy to understand and read.
The written code must be well-commented.
The wiki must be updated to describe how the audio analysis works after code completion.
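To make the nearest-neighbor requirement concrete, here is a minimal sketch of how measured formants could be matched against reference vowel formants. The formant values and vowel labels are illustrative placeholders, not the project's actual reference data, and the real implementation lives in the legacy codebase rather than this snippet.

```python
import math

# Illustrative reference formants (F1, F2) in Hz -- placeholder values,
# not the project's calibrated data. A 3-D variant would add F3 to
# every tuple; the classifier below works for either dimensionality.
VOWEL_FORMANTS = {
    "i": (270, 2290),   # as in "beet"
    "ae": (660, 1720),  # as in "bat"
    "a": (730, 1090),   # as in "father"
    "u": (300, 870),    # as in "boot"
}

def classify_vowel(formants):
    """Return the vowel whose reference formants are closest
    (Euclidean distance) to the measured formants. Accepts 2-D
    (F1, F2) or 3-D (F1, F2, F3) input, provided the reference
    table uses the same number of dimensions."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(VOWEL_FORMANTS, key=lambda v: dist(VOWEL_FORMANTS[v], formants))

print(classify_vowel((280, 2250)))  # near the "i" reference -> "i"
```

The same table-driven structure accommodates the 2-D/3-D requirement above: only the reference tuples change, not the classifier.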
Dependencies
This story is not blocked by or blocking other user stories.
Properly testing the written code requires a functional development kit from the Electrical Engineering team.
Subtasks
[x] #50
[x] #51
Estimate
This user story should ideally be completed by the end of fall term, or at the very latest the middle of winter term.
Acceptance Criteria
[ ] When a vowel is held by a voice of either gender, the pattern will continuously display light features associated with that vowel.
[ ] When a word is spoken, the pattern will display the light features associated with the spoken vowels with at least 70% accuracy.
Definition of Done
[ ] The feature has well-written inline documentation.
[ ] The feature has well-written wiki documentation.
[ ] The pattern has been tested to work with microphone input.
[ ] The pattern has been tested to work with PC audio input.
[ ] The feature meets all acceptance criteria.
[ ] The feature has been demoed to the project partner and, ideally, other stakeholders.
[ ] The feature has been merged with release code.
[ ] The feature is included in a minor or major release.