declare-lab / MELD

MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
GNU General Public License v3.0

How can I know which face in the video sample is speaking, so that I can extract my own visual features? #50

Open · david-gimeno opened this issue 9 months ago

david-gimeno commented 9 months ago

First of all, I would like to congratulate you all on the great effort that went into creating the MELD dataset. I would also like to ask whether it is possible to obtain the facial landmarks (or any other kind of information) that would allow me to extract the face of the person actively speaking, as you did when extracting the features you provide.

The reason is that I would like to explore my own visual features.
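For illustration, here is a rough sketch of the kind of per-frame face extraction I have in mind. It uses OpenCV and MediaPipe purely as stand-in tools (I do not know what the MELD authors actually used), and it only finds candidate faces; it cannot tell which one is speaking:

```python
# Rough sketch: detect all faces per frame of a MELD clip with
# MediaPipe Face Detection (an assumed tool, not the authors' pipeline).
# Yields candidate face crops only; it does NOT identify the speaker.
import cv2
import mediapipe as mp

mp_fd = mp.solutions.face_detection

def face_crops(video_path):
    """Yield (frame_index, list_of_face_crops) for every frame of the clip."""
    cap = cv2.VideoCapture(video_path)
    with mp_fd.FaceDetection(model_selection=1,
                             min_detection_confidence=0.5) as detector:
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            h, w = frame.shape[:2]
            result = detector.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            crops = []
            for det in result.detections or []:
                box = det.location_data.relative_bounding_box
                x = max(int(box.xmin * w), 0)
                y = max(int(box.ymin * h), 0)
                crops.append(frame[y:y + int(box.height * h),
                                   x:x + int(box.width * w)])
            yield idx, crops
            idx += 1
    cap.release()
```

What I am missing is the step that picks the speaking face out of these candidates.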

Thanks in advance. Best regards from Valencia,

David

rajendrac3 commented 4 weeks ago

Hi,

I am also trying to extract the faces that are speaking in the videos. This research paper (VisualVoice) does something similar: https://arxiv.org/pdf/2101.03149. Here is the code implementation: https://github.com/facebookresearch/VisualVoice/tree/main
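VisualVoice itself relies on audio-visual cross-modal consistency, so for a proper solution you would follow that repo's pipeline. As a much cruder starting point, here is a sketch of a lip-motion heuristic with MediaPipe FaceMesh: the face whose mouth opening varies the most across the clip is taken as the speaker. The inner-lip landmark indices (13/14) are MediaPipe's, but the assumption that faces come back in a stable order across frames is naive; a real pipeline would add face tracking.

```python
# Crude active-speaker heuristic: the face whose mouth opening varies
# the most over the clip is assumed to be the speaker. This is a
# baseline sketch, NOT the VisualVoice method (which uses audio-visual
# cross-modal consistency).
import cv2
import mediapipe as mp
import numpy as np

mp_mesh = mp.solutions.face_mesh

def speaker_index(video_path, max_faces=6):
    """Return the index (in MediaPipe's face ordering) of the face with
    the highest lip-motion variance, or None if no face is found.
    NOTE: assumes MediaPipe returns faces in a stable order across
    frames, which is not guaranteed."""
    openings = [[] for _ in range(max_faces)]
    cap = cv2.VideoCapture(video_path)
    with mp_mesh.FaceMesh(static_image_mode=False,
                          max_num_faces=max_faces) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            res = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            for i, lm in enumerate(res.multi_face_landmarks or []):
                # Landmarks 13/14 are the upper/lower inner-lip points;
                # their vertical gap approximates mouth opening.
                openings[i].append(abs(lm.landmark[13].y - lm.landmark[14].y))
    cap.release()
    variances = [np.var(o) if o else -1.0 for o in openings]
    best = int(np.argmax(variances))
    return best if variances[best] >= 0 else None
```

For MELD you would run something like this per utterance clip, then crop the winning face before computing your own features.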