MIT License

facial_emotion_recognition__EMOJIFIER

Recognizes a facial emotion and overlays the corresponding emoji on the person's face.

Some results first!

(results image)

Getting Started

  1. Get the code:

    • Using SSH: git clone git@github.com:vijuSR/facial_emotion_recognition__EMOJIFIER.git
      OR
    • Using HTTPS: git clone https://github.com/vijuSR/facial_emotion_recognition__EMOJIFIER.git
  2. Setup the Virtual Environment (Recommended):

    • Create the virtual environment
      • python3 -m venv </path/to/venv>
    • Activate your virtual-environment
      • Linux: source </path/to/venv>/bin/activate
      • Windows: cd </path/to/venv> then .\Scripts\activate
    • Install the requirements
      • cd <root-dir-of-project>
      • pip install --upgrade -I -r requirements.txt

        Install any missing requirement with pip install <package-name>

        That's all for the setup! :smiley:

Making it work for you:

There are 4 steps from nothing (not even a single image) to getting the result as shown above.

And you don't need anything beyond this repo.

  • STEP 0 - define your EMOTION-MAP :smile: :heart: :clap:
    1. cd <to-repo-root-dir>
    2. Open the 'emotion_map.json'
    3. Change this mapping as you like. You only need to write the "emotion-name" keys; don't worry about the numeric values assigned, the only requirement is that they are unique.
    4. There must be a .png emoji image file in the '/emoji' folder for every "emotion-name" mentioned in the emotion_map.json.
    5. Open the 'config.ini' file and change the path to the "haarcascade_frontalface_default.xml" file on your system. For example, on my system it's "G:/VENVIRONMENT/computer_vision/Lib/site-packages/cv2/data/haarcascade_frontalface_default.xml", where "G:/VENVIRONMENT/computer_vision" is my virtual environment path.
    6. 'config.ini' also contains the hyperparameters of the model. Good values depend on the model and the dataset size; the defaults should work fine for the current model and a dataset of around 1.2k to 3k images. IT'S HIGHLY RECOMMENDED TO PLAY AROUND WITH THEM.

It's time to show your emotions :heart:

P.S. -- The model was trained on my facial images only, but it was able to detect my brother's expressions as well.

(result image)