We want a way to collect data that is readable by our final ANN, which takes our coordinate values and outputs whether you're doing a thumbs up, a peace sign, a heart emote, a European footballer celebration, or none of the above. Your ANN will (probably) not be able to read anything other than an array of coordinate values, so we want a CSV or similar where each sample is a row containing all of its coordinate values plus a LABEL indicating what it is.
Example below:
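Something like the layout below (hypothetical values of our own; 21 landmarks with x/y/z each, plus a label column at the end):

```
x0,y0,z0,x1,y1,z1,...,x20,y20,z20,label
0.41,0.82,0.01,0.44,0.75,0.00,...,0.52,0.31,-0.02,thumbs_up
0.38,0.79,0.00,0.40,0.71,0.01,...,0.49,0.28,-0.01,peace
```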
We can create something by making a training pipeline, which will do (roughly):
Pass an image into MediaPipe's hand landmark detector
Get the coordinate values
Flatten the coordinate values
Add the label (European Footballer, etc.)
Write it all to a CSV row
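The flatten-and-write steps above can be sketched roughly like this. The function names are ours, and the fake landmark list stands in for real detector output: with MediaPipe Hands, the points would come from `results.multi_hand_landmarks[0].landmark` (an assumption about that API; adapt to whatever your detector actually returns).

```python
import csv
import io

def flatten_landmarks(landmarks):
    """Flatten a list of (x, y, z) landmark points into one flat list.

    With MediaPipe this would be the 21 detected hand landmarks;
    here we accept any iterable of coordinate tuples.
    """
    return [coord for point in landmarks for coord in point]

def write_sample(writer, landmarks, label):
    # One row per sample: all coordinate values first, the LABEL last.
    writer.writerow(flatten_landmarks(landmarks) + [label])

# Dummy stand-in for one detected hand: 21 landmarks, (x, y, z) each.
fake_hand = [(0.1 * i, 0.2 * i, 0.0) for i in range(21)]

# Writing to an in-memory buffer here; a real script would open a file.
buf = io.StringIO()
writer = csv.writer(buf)
write_sample(writer, fake_hand, "thumbs_up")

row = buf.getvalue().strip().split(",")
print(len(row))  # 21 landmarks * 3 coords + 1 label = 64 columns
```

Looping this over every collected image gives you the training CSV in one pass.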
You can do this with a Python script.
You also need to collect/forage for raw images. Potential ways to do this:
Create a Python script to collect and label data manually
Find a way to programmatically scrape and label data from the internet
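For the manual-collection route, one simple convention (our suggestion, not something fixed in these notes) is to save raw images into one folder per gesture, then let the pipeline derive each sample's label from its folder name:

```python
from pathlib import Path

# Hypothetical label set matching the gestures above.
LABELS = {"thumbs_up", "peace", "heart", "euro_footballer", "neither"}

def label_for(image_path):
    """Derive the label from the parent folder name,
    e.g. data/peace/img_004.jpg -> 'peace'.

    Folder-per-label is just one convention; use whatever
    matches your collection script.
    """
    label = Path(image_path).parent.name
    if label not in LABELS:
        raise ValueError(f"unknown label folder: {label}")
    return label

print(label_for("data/peace/img_004.jpg"))  # peace
```

This keeps the labeling step trivial: whoever collects the images only has to drop them into the right folder.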