bklynhlth / openwillis

Python library for digital measurement of health

Emotional Expressivity NaN #152

Closed joshwongg closed 3 weeks ago

joshwongg commented 1 month ago

Hi,

I've been trying to run the emotional expressivity function; however, all my results come up as NaN. I've checked that the files open and that the video quality is fine. No error message appears when I run the code either. I'm running the code below through miniconda.

import openwillis as ow
import tensorflow as tf
import pandas as pd

physical_devices = tf.config.list_physical_devices('GPU')
if physical_devices:
    tf.config.experimental.set_memory_growth(physical_devices[0], True)

filepath = r"C:\Users\jjsw972\OneDrive - The University of Newcastle\Desktop\joshtrialwatch.mp4"
baseline_filepath = r"C:\Users\jjsw972\OneDrive - The University of Newcastle\Desktop\joshtrialsuppress.mp4"

framewise, summary = ow.emotional_expressivity(
    filepath=filepath,
    baseline_filepath=baseline_filepath
)

print("Framewise:", framewise)
print("Summary:", summary)

if isinstance(framewise, pd.DataFrame):
    framewise.to_csv(r"C:\Users\jjsw972\OneDrive - The University of Newcastle\Desktop\Framewise_Emotional_Expressitivity.csv", index=False)
if isinstance(summary, pd.DataFrame):
    summary.to_csv(r"C:\Users\jjsw972\OneDrive - The University of Newcastle\Desktop\Summary_Emotional_Expressitivity.csv", index=False)

Any help is greatly appreciated! Thanks :)

GeorgeEfstathiadis commented 1 month ago

When running the code, if an error occurred you should see a message starting with 'Error in facial emotion calculation...'. The fact that you don't suggests the process may be quitting for some reason, which is harder to debug; do you find that to be the case? You could check by running the code in Python line by line and seeing whether Python or the terminal exits when you call the emotional_expressivity function.

Additionally, you can try the code below, which runs DeepFace on a single frame of your video, to check whether the issue comes from DeepFace:

import cv2
from deepface import DeepFace

filepath = r"path\to\your\video.mp4"  # same video path as in your script

cap = cv2.VideoCapture(filepath)
ret, img = cap.read()  # read the first frame

if not ret:
    raise ValueError("Error: Couldn't read the video.")

# OpenCV reads frames as BGR; DeepFace expects RGB
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

face_analysis = DeepFace.analyze(img_path=img_rgb, actions=['emotion'])
emotions = face_analysis[0]['emotion']

print(face_analysis)
print(emotions)

joshwongg commented 1 month ago

It comes up with an error:

from tensorflow.keras.layers import Input, Dense, Conv2D, MaxPooling2D, PReLU, Flatten, Softmax
ModuleNotFoundError: No module named 'tensorflow.keras'

I'm not sure why this occurs; I have installed keras and tensorflow (both v2.15.0) and deepface (0.0.92). Is tensorflow.keras a separate module from tensorflow and keras?

GeorgeEfstathiadis commented 1 month ago

There is no separate tensorflow.keras package; this must be some environment conflict. Your package versions are up to date, so that looks fine.

I would try uninstalling tensorflow, keras, and deepface, then reinstalling tensorflow==2.15.0 and deepface==0.0.92, in that order. You don't need to install keras separately; it's included in tensorflow. Let me know if that works.
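
If it helps, here's a quick sanity check to run after reinstalling (a minimal sketch): both imports below should succeed, since tensorflow.keras is the Keras API bundled inside the tensorflow package, and the second import is the one deepface was failing on.

# Sanity check after reinstalling: both imports should succeed with no
# separate keras install, since tensorflow bundles the Keras API.
import tensorflow as tf
from tensorflow.keras.layers import Input  # the import deepface needs
import deepface

print(tf.__version__)        # expect 2.15.0
print(deepface.__version__)  # expect 0.0.92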

Otherwise you may have to create a separate virtual environment and reinstall openwillis there.

joshwongg commented 1 month ago

Reinstalling tensorflow, keras, and deepface, as well as creating a new environment and reinstalling openwillis, still produced the same error message.

I ended up installing tf_keras==2.15.0 and adding from tensorflow import keras, which got rid of the previous error message. However, the emotional expressivity function still returns NaN. With tf_keras==2.15.0 installed, the emotion check code you provided now raises the error below:

raise ValueError(
ValueError: Face could not be detected in numpy array.Please confirm that the picture is a face photo or consider to set enforce_detection param to False.

The videos I've uploaded are all clear and of good quality, and they work with the facial expressivity function. I've also tried adding enforce_detection=False to the emotional expressivity call, which didn't work:

framewise, summary = ow.emotional_expressivity(
        filepath=filepath,
        baseline_filepath=baseline_filepath,
        enforce_detection=False
)

TypeError: emotional_expressivity() got an unexpected keyword argument 'enforce_detection'

Any help is greatly appreciated! Thanks!

GeorgeEfstathiadis commented 1 month ago

Just to confirm: you ran the single-frame snippet I shared above and it gave you the 'face could not be detected' error, right? If deepface cannot detect a face in any of the frames, that would explain why the emotional_expressivity output is all NaNs.

For emotional expressivity we use a different face detection model than for facial expressivity, so it is possible (although I agree it seems very unlikely) for one of the two to detect a face in your video while the other cannot. Also, enforce_detection=False is not an argument of the emotional expressivity function; it comes from DeepFace.analyze, but you shouldn't need to change that.
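
For reference, if you ever want to experiment with it, the parameter belongs on the DeepFace call itself; something like this (diagnostic only, reusing img_rgb from the earlier snippet):

# Diagnostic only: with enforce_detection=False, DeepFace.analyze returns
# emotion scores for the whole image even when it cannot find a face,
# instead of raising a ValueError.
face_analysis = DeepFace.analyze(
    img_path=img_rgb,
    actions=['emotion'],
    enforce_detection=False,
)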

Have you tried different videos? If the issue does not occur for other videos, that would confirm our hypothesis that it stems from that specific video and deepface not recognizing a face in it. If you get the same empty response for other videos as well, we need to debug further, as that is not normal.
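
One way to check more than just the first frame, in the same spirit as the snippet above (a minimal sketch; filepath is assumed to point at your video):

# Sketch: sample a handful of frames spread across the video and check
# whether DeepFace detects a face in each, rather than only in frame 0
# (the first frame of a recording is sometimes dark or blurred).
import cv2
from deepface import DeepFace

cap = cv2.VideoCapture(filepath)
n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

for idx in range(0, n_frames, max(1, n_frames // 5)):
    cap.set(cv2.CAP_PROP_POS_FRAMES, idx)  # jump to frame idx
    ret, img = cap.read()
    if not ret:
        continue
    img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    try:
        DeepFace.analyze(img_path=img_rgb, actions=['emotion'])
        print(f"frame {idx}: face detected")
    except ValueError:
        print(f"frame {idx}: no face detected")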

kcmcveigh commented 1 month ago

Hey Josh,

Another quick question to add to Georgios': does this happen immediately, or does the function run for a while (10-30 seconds or longer)? This will help give us an idea of where in the function things might be going wrong.

joshwongg commented 1 month ago

> Just to confirm: you ran the single-frame snippet I shared above and it gave you the 'face could not be detected' error, right? [...] Have you tried different videos?

Yes, I've tried ~20 different videos with this code, and none of them have worked. The specific output is below:

Traceback (most recent call last):
  File "C:\Users\jjsw972\OneDrive - The University of Newcastle\Desktop\openwillis-main\emotioncheck.py", line 14, in <module>
    face_analysis = DeepFace.analyze(img_path=img_rgb, actions=['emotion'])
  File "C:\Users\jjsw972\AppData\Local\miniconda3\envs\openwillis_env\lib\site-packages\deepface\DeepFace.py", line 247, in analyze
    return demography.analyze(
  File "C:\Users\jjsw972\AppData\Local\miniconda3\envs\openwillis_env\lib\site-packages\deepface\modules\demography.py", line 123, in analyze
    img_objs = detection.extract_faces(
  File "C:\Users\jjsw972\AppData\Local\miniconda3\envs\openwillis_env\lib\site-packages\deepface\modules\detection.py", line 96, in extract_faces
    raise ValueError(
ValueError: Face could not be detected in numpy array.Please confirm that the picture is a face photo or consider to set enforce_detection param to False.

The error occurs almost instantly; the code usually runs for about 3 seconds before it appears. I hope this information is helpful!

Thanks :)

kcmcveigh commented 1 month ago

Hey Josh,

Is this error from the test code Georgios sent? If so, it suggests there might be an issue with the deepface library, which we use for some of our face analyses.

Would it be possible for you to send an example piece of code and a video you've been using?

If the videos you are using are clinical interviews or recordings of research participants and cannot be shared, you can try capturing a video you'd be comfortable sharing, either with a webcam or by taking one from a publicly accessible source like YouTube. Then confirm that you get the same error with that shareable video.

The code + sample video will help us debug what's going on!

Thanks!

joshwongg commented 1 month ago

I've taken some higher quality videos of myself with my phone. When using the emotion check code below, the new, higher quality videos now produce output, with values that I assume feed into the emotional expressivity function:

import cv2
import pandas as pd
from deepface import DeepFace

cap = cv2.VideoCapture(r"C:\Users\jjsw972\Downloads\josh weather expressive.mp4")
ret, img = cap.read()

if not ret:
    raise ValueError("Error: Couldn't read the video.")

img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

face_analysis = DeepFace.analyze(img_path=img_rgb, actions=['emotion'])
emotions = face_analysis[0]['emotion']

print(face_analysis)
print(emotions)

[{'emotion': {'angry': 1.6640982651173035, 'disgust': 0.0018090942090209848, 'fear': 58.86023395420515, 'happy': 0.07155157353663882, 'sad': 25.194077205304932, 'surprise': 0.41202444245066894, 'neutral': 13.796206640971103}, 'dominant_emotion': 'fear', 'region': {'x': 514, 'y': 200, 'w': 347, 'h': 347, 'left_eye': (750, 322), 'right_eye': (612, 325)}, 'face_confidence': 0.95}]
{'angry': 1.6640982651173035, 'disgust': 0.0018090942090209848, 'fear': 58.86023395420515, 'happy': 0.07155157353663882, 'sad': 25.194077205304932, 'surprise': 0.41202444245066894, 'neutral': 13.796206640971103}

However, I also tried using this video for the emotional expressivity function using the code below:

import openwillis as ow
import tensorflow as tf
from tensorflow import keras
import pandas as pd

physical_devices = tf.config.list_physical_devices('GPU')
if physical_devices:
    tf.config.experimental.set_memory_growth(physical_devices[0], True)

config = tf.compat.v1.ConfigProto(
    gpu_options=tf.compat.v1.GPUOptions(per_process_gpu_memory_fraction=0.8)
    # device_count={'GPU': 1}
)
config.gpu_options.allow_growth = True
session = tf.compat.v1.Session(config=config)
tf.compat.v1.keras.backend.set_session(session)

filepath = r"C:\Users\jjsw972\Downloads\josh weather expressive.mp4"
baseline_filepath = r"C:\Users\jjsw972\Downloads\josh weather baseline.mp4"

framewise, summary = ow.emotional_expressivity(
        filepath=filepath,
        baseline_filepath=baseline_filepath,
)

print("Framewise:", framewise)
print("Summary:", summary)

if isinstance(framewise, pd.DataFrame):
    framewise.to_csv(r"C:\Users\jjsw972\OneDrive - The University of Newcastle\Desktop\Framewise_Emotional_Expressitivity.csv", index=False)
if isinstance(summary, pd.DataFrame):
    summary.to_csv(r"C:\Users\jjsw972\OneDrive - The University of Newcastle\Desktop\Summary_Emotional_Expressitivity.csv", index=False)

This code, with the new higher quality videos, still returns NaN. I've also attached the videos below.

Thank you!

https://github.com/user-attachments/assets/7cc92640-8b0e-402b-abcf-e2f4d909ea23

https://github.com/user-attachments/assets/d7dd64a1-c971-451a-9706-1f97b7e9251d

kcmcveigh commented 1 month ago

Hey Josh - thanks for sending over the example videos.

Both these videos worked for me with the code you provided - even after I completely reinstalled openwillis with a fresh environment. So let's check a few things:

  1. Can you check which versions of openwillis and deepface you have installed? I'm wondering if there is a mismatch between your version of openwillis and your version of deepface (an earlier version of openwillis used a version of deepface that is no longer compatible with the current one). The openwillis version should be 2.2.4 and deepface should be 0.0.92.

  2. Can you try running this code? It uses one of the helper functions in facial_emotion, so we can pinpoint where things are going awry. My guess is that to get it to run you'll have to do the same sort of tensorflow/GPU setup you did for the previous code. I also think it's quite likely this code will crash, but the hope is that the crash gives us an idea of where the issue is.

from openwillis.measures.video.facial_emotion import run_deepface

measures_dict = {
    "angry": "angry",
    "disgust": "disgust",
    "fear": "fear",
    "happy": "happiness",
    "sad": "sadness",
    "surprise": "surprise",
    "neutral": "neutral",
    "comp_exp": "composite",
}

# the first argument is just your video path (for either video)
out_list = run_deepface("emo_express_express.mp4", measures_dict)
print(out_list)

Thanks for working with us on this!

joshwongg commented 1 month ago

I've checked the versions of openwillis and deepface, and they are both the proper versions.

From running the code you've just provided, the output is as follows:

 File "C:\Users\jjsw972\OneDrive - The University of Newcastle\Desktop\openwillis-main\checkcheck.py", line 14, in <module>
    out_list = run_deepface("emo_express_express.mp4", measures_dict)
  File "C:\Users\jjsw972\OneDrive - The University of Newcastle\Desktop\openwillis-main\openwillis\measures\video\facial_emotion.py", line 43, in run_deepface
    cols = [measures['angry'], measures['disgust'], measures['fear'], measures['happy'], measures['sad'],
KeyError: 'angry'

Thanks so much for your help again!

kcmcveigh commented 1 month ago

Hey Josh,

Thanks for checking the version info. I've updated the script I sent yesterday so that it also prints this info as you run it, just to confirm. Also, in this updated code, make sure you swap out the video path for the file path to your video. Second, make sure measures_dict is defined exactly as it is in the code below, since that seems to be where the error came from in this script, though I'm not completely sure why that would happen.
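
If it's easy, you could also add a small guard just before the run_deepface call in the script below (a sketch; the required-key list mirrors the keys the helper indexes), so a wrong or stale measures_dict fails loudly before any video processing:

# Sketch: verify measures_dict has every key run_deepface indexes.
required = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral", "comp_exp"]
missing = [k for k in required if k not in measures_dict]
if missing:
    raise KeyError(f"measures_dict is missing keys: {missing}")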

Also, when you run this code, please copy and paste all of the output, as it prints version info for python, deepface, and openwillis:

import openwillis as ow
import deepface
from openwillis.measures.video.facial_emotion import run_deepface
import sys
import subprocess

def get_package_version(package_name):
  result = subprocess.run([sys.executable, "-m", "pip", "show", package_name], capture_output=True, text=True)
  for line in result.stdout.splitlines():
    if line.startswith("Version:"):
      return line.split(" ")[1]
  return "Unknown"

print("Deepface version: ", deepface.__version__)
print("OpenWillis version: ", get_package_version("openwillis"))
print("Python version: ", sys.version)

measures_dict = {
  "angry": "angry",
  "disgust": "disgust",
  "fear": "fear",
  "happy": "happiness",
  "sad": "sadness",
  "surprise": "surprise",
  "neutral": "neutral",
  "comp_exp": "composite",
}

video_path = "Your Video Path Here"
out_list = run_deepface(video_path, measures_dict)

print(out_list)

Let us know how it goes!

joshwongg commented 4 weeks ago

All of the output from the code you just gave is below:

I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
WARNING:tensorflow:From C:\Users\jjsw972\AppData\Local\miniconda3\envs\openwillis_env\lib\site-packages\keras\src\losses.py:2976: The name tf.losses.sparse_softmax_cross_entropy is deprecated. Please use tf.compat.v1.losses.sparse_softmax_cross_entropy instead.

################################################################################
### WARNING, path does not exist: KALDI_ROOT=/mnt/matylda5/iveselyk/Tools/kaldi-trunk
###          (please add 'export KALDI_ROOT=<your_path>' in your $HOME/.profile)
###          (or run as: KALDI_ROOT=<your_path> python <your_script>.py)
################################################################################

Deepface version:  0.0.92
OpenWillis version:  2.2.4
Python version:  3.10.15 | packaged by conda-forge | (main, Sep 30 2024, 17:41:41) [MSC v.1941 64 bit (AMD64)]
2024-10-17 23:08:04.337700: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: SSE SSE2 SSE3 SSE4.1 SSE4.2 AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.

After that, the output is all NaN values as below. This happened for both videos.

[   frame  angry  disgust  fear  happiness  sadness  surprise  neutral
0      0    NaN      NaN   NaN        NaN      NaN       NaN      NaN,    frame  angry  disgust  fear  happiness  sadness  surprise  neutral
0      1    NaN      NaN   NaN        NaN      NaN       NaN      NaN,    frame  angry  disgust  fear  happiness  sadness  surprise  neutral
0      2    NaN      NaN   NaN        NaN      NaN       NaN      NaN,    frame  angry  disgust  fear  happiness  sadness  surprise  neutral
0      3    NaN      NaN   NaN        NaN      NaN       NaN      NaN,    frame  angry  disgust  fear  happiness  sadness  surprise  neutral

Thank you so much again for your help! Really appreciate it :)

kcmcveigh commented 4 weeks ago

Hey Josh,

I think it might be worth hopping on a call to see if we can debug this synchronously instead of async. Want to shoot me an email at kieran.mcveighc@gmail.com with some times next week that might work for you (preferably between 8am and 7pm EST)? Thanks!

-Kieran