Main Idea: Combine Machine Learning with Sign Language to translate live sign language into English
For our project, we want to create an application that uses the user's webcam to let people show an ASL sign to the camera. As soon as the sign is shown, the application tells the user which sign it recognized. For example, if the user showed a hand sign like this:
The application would output "Letter A." We would build our application so that it can recognize ASL signs for many different things, including letters of the alphabet, numbers, specific words, and (maybe) even whole sentences.
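The recognition step could be sketched as follows. This is only an illustrative toy, assuming a hand-landmark feature vector per frame and a nearest-neighbor lookup; the names `classify_sign` and `REFERENCE_SIGNS` are placeholders, not a real API, and a real version would use a trained model.

```python
import math

# Toy "training data": one hand-landmark feature vector per letter.
# A real version would come from a model trained on ASL alphabet images.
REFERENCE_SIGNS = {
    "A": [0.1, 0.2, 0.3],
    "B": [0.9, 0.8, 0.7],
}

def classify_sign(features):
    """Return the letter whose reference vector is nearest to `features`."""
    def distance(ref):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(features, ref)))
    return min(REFERENCE_SIGNS, key=lambda letter: distance(REFERENCE_SIGNS[letter]))

# A webcam frame whose features land close to the "A" reference vector:
label = classify_sign([0.12, 0.21, 0.28])
print(f"Letter {label}")  # Letter A
```

In the real app, the feature vector would come from a hand-tracking step on each webcam frame rather than being hand-written.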
Here is an example screen of our AI application:
Credit: Emaad Mir, Canva
Diagram of what is going on behind the scenes:
Credit: Ethan Tran, draw.io
Diagram of the steps behind this application:
Credit: Tay Kim, draw.io
Diagram of the frontend connection to the backend:
Credit: Anthony Bazhenov, draw.io
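The frontend-to-backend connection in the diagram above could follow a contract like the sketch below. The route name, JSON fields, and the one-line classification rule are all assumptions made for illustration, not the final design; the frontend would POST each captured frame's features and display the returned label.

```python
import json

def handle_predict(request_body: str) -> str:
    """Backend handler sketch: take a JSON payload with hand-landmark
    features from the webcam frontend, classify it, and return a JSON label."""
    data = json.loads(request_body)
    features = data["landmarks"]
    # Placeholder rule standing in for the real model's prediction.
    letter = "A" if sum(features) < 1.0 else "B"
    return json.dumps({"label": f"Letter {letter}"})

# The frontend sends features from one frame and shows the label it gets back:
response = handle_predict(json.dumps({"landmarks": [0.1, 0.2, 0.3]}))
print(response)  # {"label": "Letter A"}
```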
Other possible aspects of the project
Allow users to upload videos of ASL, transcribe the sign language into English, and let them save the transcript
Use the webcam for other features, such as voice/facial recognition or analyzing the user's surroundings, for example recognizing plants or animals
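The video-transcription idea above could work by classifying each frame of an uploaded clip and collapsing consecutive repeats into a transcript the user can save. This is a hedged sketch: `classify_frame` is a stand-in for the real per-frame model, and the frames here are pre-labeled letters purely for illustration.

```python
def classify_frame(frame):
    # Placeholder: a real version would run the sign classifier on the frame.
    return frame  # frames here are already letters, for illustration only

def transcribe(frames):
    """Collapse per-frame letters into a transcript, skipping repeats."""
    transcript = []
    for frame in frames:
        letter = classify_frame(frame)
        if not transcript or transcript[-1] != letter:
            transcript.append(letter)
    return "".join(transcript)

# A clip where the signer holds "H" for a few frames, then "I":
print(transcribe(["H", "H", "H", "I", "I"]))  # HI
```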
Design Responsibilities
Tay: Design the user interaction steps for our program and website
Ethan T.: Design the CSS styling for the website, getting things to look like our "style"
Emaad: Design frontend screens similar to what he already has, but with more detail
Anthony: Design the frontend/backend connection using a diagram