filby89 / body-face-emotion-recognition

Code for the paper "Fusing Body Posture with Facial Expressions for Joint Recognition of Affect in Child-Robot Interaction"

reproduce #4

Open shenyujie1125 opened 2 years ago

shenyujie1125 commented 2 years ago

Hello, I see that you use OpenPose to extract human keypoints, and the coordinates of each point are decimals less than one. But when I use OpenPose to extract keypoints, the coordinates are usually large. Did you use a normalization method, or is something wrong in my OpenPose Python code? Can you provide the detailed OpenPose code you used, so I can study it further and reproduce your results?
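For reference, raw OpenPose keypoints in pixel space can be normalized to the [0, 1] range manually by dividing by the image dimensions. A minimal sketch, assuming a flat `[x, y, confidence, ...]` keypoint array as OpenPose emits for `BODY_25` (the function name and sample values are hypothetical):

```python
def normalize_keypoints(keypoints, width, height):
    """Scale flat OpenPose [x, y, c, x, y, c, ...] pixel keypoints to [0, 1].

    Confidence values are left untouched; only x is divided by the image
    width and y by the image height.
    """
    out = []
    for i in range(0, len(keypoints), 3):
        x, y, c = keypoints[i:i + 3]
        out.extend([x / width, y / height, c])
    return out

# Example: one keypoint at pixel (320, 240) in a 640x480 image
raw = [320.0, 240.0, 0.9]
print(normalize_keypoints(raw, 640, 480))  # → [0.5, 0.5, 0.9]
```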

shenyujie1125 commented 2 years ago

Also, regarding your facial feature extraction with ResNet, could you provide your training code? I'm very interested in your research on body movements. Thank you, and I look forward to hearing from you.

filby89 commented 2 years ago

Hi, for OpenPose I used the `--keypoint_scale 3` flag, which scales the keypoint coordinates to the [0, 1] range (see https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/include/openpose/flags.hpp). The complete command was:

```
./openpose.bin --keypoint_scale 3 --model_pose 'BODY_25' --model_folder /openpose/models/ --write_images_format 'jpg' --image_dir {DIR} --face --hand --write_json {DIR_openpose}/json/ --write_images {DIR_openpose}/images/ --scale_number 4 --scale_gap 0.25 --display 0 --hand_scale_number 6 --hand_scale_range 0.4
```
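The `--write_json` flag above produces one JSON file per frame, each containing a `people` list whose entries hold flat `pose_keypoints_2d` arrays. A minimal sketch of reading them back in Python (the file path and sample values below are made up for the demo; with `--keypoint_scale 3` the x/y values come out in [0, 1]):

```python
import json

def read_pose(json_path):
    """Read the first person's 2D pose keypoints from an OpenPose JSON file.

    Returns a list of (x, y, confidence) triples, or an empty list when no
    person was detected in the frame.
    """
    with open(json_path) as f:
        data = json.load(f)
    if not data["people"]:
        return []
    flat = data["people"][0]["pose_keypoints_2d"]
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]

# Demo with a synthetic one-keypoint file (hypothetical values):
sample = {"version": 1.3,
          "people": [{"pose_keypoints_2d": [0.51, 0.22, 0.93]}]}
with open("demo_keypoints.json", "w") as f:
    json.dump(sample, f)
print(read_pose("demo_keypoints.json"))  # → [(0.51, 0.22, 0.93)]
```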

For training the facial feature extraction network, you can check the train_affectnet.py file in

https://github.com/filby89/multimodal-emotion-recognition