A computer-vision-based yoga coach web app
Yoga Coach web is a web app that lets you record your body pose with your webcam and gives you real-time feedback on how well you are performing a certain exercise, based on computer vision analysis. This app was developed as my final project during the data science bootcamp at Spiced Academy in Berlin. If you don't want to set it up yourself, you can have a look at the screenshots or the presentation.
The picture in this screenshot was taken from commons.wikimedia.org on 03/10/2022.
I thank all the people from Spiced Academy and the Bundesagentur für Arbeit who supported me. Images for the exercises were taken from commons.wikimedia.org; the respective URL and date of access are shown in the app. Descriptions of the exercises are taken from the en.wikipedia.org JSON API, and a link to the article is given for further reading. The image and video stream processing follows the MediaPipe Pose python API and this pyshine tutorial. The script for plotting the body pose landmarks is based on these two threads 1, 2. The angles were calculated following this thread.
1) Go to the folder where you want to install it, e.g. your home folder: cd ~/
2) Clone the repository: git clone https://github.com/klmhsb42/yoga_coach_web.git
3) Go inside the repository folder: cd yoga_coach_web
4) Create a virtual environment, e.g. with Anaconda: conda create --name yogacoach
5) Activate the virtual environment: conda activate yogacoach
6) Update pip in the virtual environment: python -m pip install -U pip
7) Install the required python packages inside the virtual environment: pip install -r requirements.txt
8) Run the app: python app.py
9) Close the app: Ctrl + C
10) To re-run, go to 8); you might have to activate the virtual environment again if you are in a new terminal.
Using Yoga Coach web is intuitive. To start, just select the exercise you want to begin with using the Prev/Next buttons and press the Start button. Place your computer so that your full body is visible in the webcam. Then try to perform the exercise and listen to the feedback.
Yoga Coach web is based on the web framework Flask and the template engine Jinja. Most data is sent through a websocket using Flask-SocketIO. The body pose detection is performed live by MediaPipe Pose in the python backend. The pose landmarks from MediaPipe Pose are used to calculate joint angles. For this project, 16 angles were considered relevant and defined in this .csv file. These angles were calculated for each exercise with a python script. The difference between the correct angles and the current angles is calculated, and a feedback text is created based on these differences. The text-to-speech (TTS) for the feedback is performed by gTTS.
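As an illustration of this comparison step, here is a minimal sketch of how the difference between the reference angles and the currently detected angles could be turned into a spoken feedback text. The function name, joint names and the tolerance threshold are assumptions for illustration, not the actual implementation:

```python
import numpy as np
from gtts import gTTS

TOLERANCE_DEG = 15  # hypothetical tolerance before a joint is reported as off

def build_feedback(correct_angles, current_angles, joint_names):
    """Turn the deviations between reference and current angles
    into a short feedback sentence."""
    diffs = np.array(current_angles) - np.array(correct_angles)
    messages = []
    for name, diff in zip(joint_names, diffs):
        if abs(diff) > TOLERANCE_DEG:
            direction = "more" if diff > 0 else "less"
            messages.append(f"bend your {name} {direction}")
    return ", ".join(messages).capitalize() if messages else "Well done, hold the pose"

# Example usage with two angles; gTTS renders the feedback as speech.
text = build_feedback([90, 170], [120, 168], ["left knee", "right elbow"])
gTTS(text=text, lang="en").save("feedback.mp3")
```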
The workflow was inspired by Muley et al. 2020 and Thoutam et al. 2022.
About MediaPipe Pose:
Relevant properties:
Interesting but not relevant for this project (yet):
The EDA of the returned pose landmarks is documented in this Jupyter Notebook. It also shows how the angles were calculated and compared between a correct and a wrong pose for one example exercise.
To add new exercises you have to modify exercises.json. You can either create a new category and add the new exercise there, or you can add it directly to an existing category. You can also add an image of the exercise to the static/exercises folder, following the structure given in the JSON file. Please insert its origin and respect copyrights.
First, you need to gather the landmarks of the new exercise as a JSON file. You can do so by either:
1) Use the image which you have added in the previous step. 2) Record yourself with your webcam while performing the exercise correctly.
For 1):
Use gather_from_image.py by running python gather_from_image.py inside the artifacts/ folder. Before running, set the path to the image you want to use. The script will calculate the body pose landmarks and save them as a JSON file in the artifacts/collect/ folder, as well as a new image file with the landmarks drawn on it in the same directory as your input image.
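For orientation only, here is a minimal sketch of what such an image-based gathering script could do with the MediaPipe Pose python API. The file paths and the JSON layout are assumptions for illustration, not the actual gather_from_image.py:

```python
import json
import cv2
import mediapipe as mp

IMAGE_PATH = "../static/exercises/my_exercise.jpg"  # hypothetical input image

mp_pose = mp.solutions.pose
mp_drawing = mp.solutions.drawing_utils

with mp_pose.Pose(static_image_mode=True) as pose:
    image = cv2.imread(IMAGE_PATH)
    # MediaPipe expects RGB, OpenCV loads BGR.
    results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

# Store the normalized landmark coordinates as JSON.
landmarks = [
    {"x": lm.x, "y": lm.y, "z": lm.z, "visibility": lm.visibility}
    for lm in results.pose_landmarks.landmark
]
with open("collect/landmarks.json", "w") as f:
    json.dump(landmarks, f)

# Save a copy of the input image with the landmarks drawn on it.
mp_drawing.draw_landmarks(image, results.pose_landmarks, mp_pose.POSE_CONNECTIONS)
cv2.imwrite(IMAGE_PATH.replace(".jpg", "_landmarks.jpg"), image)
```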
For 2):
Use gather_from_webcam.py by running python gather_from_webcam.py inside the artifacts/ folder. Your webcam will open, capture your body pose landmarks per frame and save them as JSON files in the artifacts/collect/ folder. To gather correct landmarks, place your computer so that your full body is visible in the webcam and then perform the exercise correctly. You can stop the script by pressing Ctrl + C. Remove the files from the artifacts/collect/ folder before you re-run the script.
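Again only as a rough sketch and not the actual gather_from_webcam.py, the webcam variant could loop over the captured frames and write one JSON file per frame (paths and file names are assumptions):

```python
import json
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
cap = cv2.VideoCapture(0)  # default webcam

with mp_pose.Pose() as pose:
    frame_no = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            landmarks = [
                {"x": lm.x, "y": lm.y, "z": lm.z, "visibility": lm.visibility}
                for lm in results.pose_landmarks.landmark
            ]
            with open(f"collect/frame_{frame_no:05d}.json", "w") as f:
                json.dump(landmarks, f)
            frame_no += 1
cap.release()
```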
Next, if you have used your webcam, you need to select the one body pose whose landmarks represent this exercise best. To select the right one, you can plot the landmarks using plot.py. For that, run python plot.py inside the artifacts/ folder.
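As a hedged sketch of what such plotting could look like (the file locations and the 3D scatter style are assumptions, not the actual plot.py):

```python
import glob
import json
import matplotlib.pyplot as plt

# Plot every collected landmark file so the best pose can be picked by eye.
for path in sorted(glob.glob("collect/*.json")):
    with open(path) as f:
        landmarks = json.load(f)
    xs = [lm["x"] for lm in landmarks]
    ys = [lm["y"] for lm in landmarks]
    zs = [lm["z"] for lm in landmarks]

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    # Note: MediaPipe y grows downwards, so the pose may appear flipped.
    ax.scatter(xs, ys, zs)
    ax.set_title(path)
    plt.show()
```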
To calculate the angles of these correct pose landmarks you can use angles.py by running python angles.py inside the artifacts/ folder. This will print an array of angle values in the terminal. Copy this array and insert it in the following step.
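As an illustration of the underlying calculation, here is a minimal sketch of how a single joint angle can be computed from three landmarks with arctan2; the example coordinates are made up:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at point b, formed by the points a-b-c,
    each given as (x, y) coordinates of a pose landmark."""
    a, b, c = np.array(a), np.array(b), np.array(c)
    radians = np.arctan2(c[1] - b[1], c[0] - b[0]) - np.arctan2(a[1] - b[1], a[0] - b[0])
    angle = abs(np.degrees(radians))
    return 360 - angle if angle > 180 else angle

# Example: left elbow angle from the shoulder, elbow and wrist landmarks
# (indices 11, 13 and 15 in the MediaPipe Pose numbering).
shoulder, elbow, wrist = (0.45, 0.30), (0.50, 0.45), (0.48, 0.60)
print(joint_angle(shoulder, elbow, wrist))
```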
The copied angles from the previous step must then be inserted into "angles": [] for this new exercise in the exercises.json file. Now you are ready to re-run the server (see How to setup) and test your newly created exercise.
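To make the end result more concrete, here is a hedged sketch of how such an entry could be appended programmatically. Apart from the "angles" key mentioned above, the key names, the category name and the overall schema of exercises.json are assumptions for illustration only:

```python
import json

new_exercise = {
    "name": "Warrior II",                       # hypothetical exercise name
    "image": "static/exercises/warrior_2.jpg",  # image added in the earlier step
    "angles": [92.4, 178.1, 165.7],             # values copied from the angles.py output
}

with open("exercises.json") as f:
    exercises = json.load(f)

exercises["standing"].append(new_exercise)  # "standing" is a hypothetical category

with open("exercises.json", "w") as f:
    json.dump(exercises, f, indent=2)
```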