ieeeuoft / mog-detector


[Gesture Recognition] Create training pipeline #4

Closed Zazzscoot closed 1 month ago

Zazzscoot commented 1 month ago

We want a way to collect data that's readable by our final ANN, which takes our coordinates and outputs whether you're doing a thumbs up, a peace sign, a heart emote, the European footballer, or none of the above. Your ANN will (probably) not be able to read anything other than an array of coordinate values, so we want a CSV or similar with one sample per row: all of its coordinate values plus a LABEL indicating which gesture it is.

Example below: [image of a sample CSV]
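Roughly, the layout would look like this (the column names and values here are invented for illustration, not taken from the image):

```csv
x0,y0,z0,x1,y1,z1,...,x20,y20,z20,label
0.42,0.61,-0.03,0.45,0.55,-0.05,...,0.38,0.21,-0.11,thumbs_up
```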

We can do this by building a training pipeline, which will (roughly):

  1. Pass in an image to Hand Landmarks Mediapipe
  2. Get the coordinate values
  3. Flatten the coordinate values
  4. Add the label (European Footballer, etc.)
  5. Put it into a CSV

You can do this with a Python script; a rough sketch follows.
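A minimal sketch of that script, assuming the legacy MediaPipe `solutions` Hands API and OpenCV for image loading. The `images/<label>/` folder layout, the label strings, and the output filename are all illustrative assumptions, not something decided in this issue:

```python
# Sketch: images/<label>/*.jpg  ->  training_data.csv (63 coords + label per row)
import csv
import os

import cv2
import mediapipe as mp

# Hypothetical label names; match these to the gesture classes you settle on.
LABELS = ["thumbs_up", "peace", "heart", "european_footballer", "neither"]

def extract_row(image_path, label, hands):
    """Run Hand Landmarks on one image and return a flattened row, or None."""
    image = cv2.imread(image_path)
    if image is None:
        return None
    # MediaPipe expects RGB; OpenCV loads BGR.
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None  # no hand detected, so this image is skipped
    landmarks = results.multi_hand_landmarks[0].landmark  # 21 landmarks
    row = []
    for lm in landmarks:
        row.extend([lm.x, lm.y, lm.z])  # flatten to x0,y0,z0,x1,y1,z1,...
    row.append(label)
    return row

def build_csv(image_root="images", out_path="training_data.csv"):
    with mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1) as hands, \
         open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        header = [f"{axis}{i}" for i in range(21) for axis in ("x", "y", "z")]
        writer.writerow(header + ["label"])
        for label in LABELS:
            folder = os.path.join(image_root, label)
            for name in sorted(os.listdir(folder)):
                row = extract_row(os.path.join(folder, name), label, hands)
                if row is not None:
                    writer.writerow(row)

if __name__ == "__main__":
    build_csv()
```

Each row ends up with 63 coordinate columns (21 landmarks × x, y, z) plus the label, matching the CSV layout sketched above. One subtlety: images where MediaPipe finds no hand are dropped, so "neither" samples should still contain a hand (just not making a gesture).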

You also need to collect/forage for raw images. Potential ways to do this:
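(The original list here was cut off.) One obvious option, offered as an assumption rather than anything decided in this issue, is grabbing frames from a webcam with OpenCV; the save path and key bindings below are illustrative:

```python
import os

import cv2

def capture_images(label, out_dir="images", cam_index=0):
    """Show a webcam preview; SPACE saves the current frame, q quits."""
    os.makedirs(os.path.join(out_dir, label), exist_ok=True)
    cap = cv2.VideoCapture(cam_index)
    count = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("capture (SPACE saves, q quits)", frame)
        key = cv2.waitKey(1) & 0xFF
        if key == ord(" "):
            cv2.imwrite(os.path.join(out_dir, label, f"{label}_{count}.jpg"), frame)
            count += 1
        elif key == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```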

Zazzscoot commented 1 month ago

done, need to test