This project is an Emotion-Based Music Recommender system that uses facial and hand landmarks detected through a webcam to infer the user's emotion and recommend music based on that emotion.
A pre-trained Keras model (model.h5) classifies the detected landmarks into a specific emotion.

Clone the repository:

```bash
git clone https://github.com/Gayathri-Selvaganapathi/emotion_based_music_recommendation.git
cd emotion_based_music_recommendation
```
Ensure you have Python 3.7+ installed, then install the dependencies:

```bash
pip install -r requirements.txt
```
Prepare the model and label files: make sure model.h5 and labels.npy are present in the project directory.

Run the application:

```bash
streamlit run app.py
```

Interact with the application: grant webcam access so the app can capture your face and hands, then follow the on-screen prompts to receive music recommendations for the detected emotion.
* app.py: The main script to run the application.
* model.h5: The pre-trained Keras model for emotion detection.
* labels.npy: The labels corresponding to the emotions.
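The labels file is a plain NumPy array mapping the model's output indices to emotion names. As a sketch (the emotion names here are hypothetical; the shipped labels.npy may differ), it can be created and inspected like this:

```python
import numpy as np

# Hypothetical label set -- the real labels.npy ships with the repo
# and may contain different emotion names.
labels = np.array(["angry", "happy", "neutral", "sad"])
np.save("labels.npy", labels)

# Loading works the same way app.py would load the shipped file.
loaded = np.load("labels.npy")
print(loaded)
```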
Emotion Detection: The application uses Mediapipe to process video frames from the webcam, detecting facial landmarks and hand landmarks in each frame.
Feature Extraction: The detected landmarks are used as features for emotion classification. These features are normalized relative to specific key points on the face and hands.
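The normalization step can be sketched with NumPy: each landmark's coordinates are expressed relative to a reference key point (here, hypothetically, the first landmark), which makes the feature vector invariant to where the face or hand sits in the frame:

```python
import numpy as np

# Hypothetical raw (x, y) landmarks from one frame: 5 points.
landmarks = np.array([
    [0.50, 0.40],   # reference key point (e.g. a fixed face landmark)
    [0.55, 0.42],
    [0.45, 0.42],
    [0.52, 0.50],
    [0.48, 0.50],
])

# Subtract the reference point so features describe relative geometry,
# not absolute position in the frame, then flatten for the classifier.
features = (landmarks - landmarks[0]).flatten()
print(features)
```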
Prediction: The features are passed through a pre-trained Keras model that predicts the user's emotion.
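Assuming the model outputs one probability per emotion class, the prediction step reduces to an argmax over the model's output followed by a lookup into the labels array. A sketch with a stand-in probability vector in place of a real `model.predict` call:

```python
import numpy as np

# Hypothetical label set; the real one is loaded from labels.npy.
labels = np.array(["angry", "happy", "neutral", "sad"])

# Stand-in for model.predict(features) on one frame: a probability
# distribution over the emotion classes.
probs = np.array([[0.05, 0.80, 0.10, 0.05]])

# The predicted emotion is the class with the highest probability.
emotion = labels[np.argmax(probs)]
print(emotion)  # happy
```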
Music Recommendation: Based on the predicted emotion, the application suggests music by generating a YouTube search query.
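Generating the recommendation then amounts to building a YouTube search URL from the detected emotion. The query template below is a hedged sketch using only the standard library (the `language` and `singer` preferences are hypothetical extras; app.py's actual query format may differ):

```python
from urllib.parse import quote

def youtube_search_url(emotion: str, language: str = "", singer: str = "") -> str:
    """Build a YouTube search URL for songs matching the detected emotion.

    `language` and `singer` are hypothetical extra preferences; the real
    app's query format may differ.
    """
    query = f"{language} {emotion} songs {singer}".strip()
    return "https://www.youtube.com/results?search_query=" + quote(query)

print(youtube_search_url("happy"))
# https://www.youtube.com/results?search_query=happy%20songs
```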