declare-lab / MELD

MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
GNU General Public License v3.0

How do I convert a video to the data format required for this model? #42

Open She-yh opened 1 year ago

She-yh commented 1 year ago

I've got the bimodal_weights_emotion.hdf5 model from the baseline, and it recognizes emotions from the MELD dataset well. But I don't know how to recognize emotions from my own video. How do I convert a video into the data format this model requires?

I'm a beginner in multimodal emotion recognition, so I'd really appreciate any tips.
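For anyone attempting this, here is a rough, unofficial sketch (not from this repo) of the preprocessing that would be needed. The baseline consumes fixed-shape, utterance-level feature matrices, so a new video has to be segmented into utterances, its audio track extracted (e.g. with ffmpeg), audio and text features computed (the MELD paper describes openSMILE audio features; librosa MFCCs are a common substitute), and everything padded to the shapes used at training time. The helper names `extract_audio` and `pad_or_truncate` below are hypothetical, and the feature dimensions are assumptions, not the repo's actual values:

```python
import subprocess
import numpy as np

def extract_audio(video_path, wav_path, sr=16000):
    """Strip a mono audio track from a video file; requires ffmpeg on PATH.
    (Hypothetical helper, not part of the MELD codebase.)"""
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, "-vn", "-ac", "1",
         "-ar", str(sr), wav_path],
        check=True,
    )
    return wav_path

def pad_or_truncate(features, max_len):
    """Zero-pad (or truncate) a (frames, dim) feature matrix to exactly
    max_len rows, so every utterance matches the fixed input shape the
    model was trained on."""
    n, d = features.shape
    if n < max_len:
        features = np.vstack([features, np.zeros((max_len - n, d))])
    return features[:max_len]

# After computing per-utterance audio features (e.g. MFCCs) and text
# embeddings for the transcript, stack them into arrays shaped like the
# baseline's pickled train/val/test features and pass those to the
# loaded bimodal_weights_emotion.hdf5 model.
```

The key point is matching shapes: inspect the pickled feature files the baseline loads, and make your own video's features identical in dimensionality and padding before calling the model.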

Triaill commented 1 month ago

I've got the bimodal_weights_emotion.hdf5 model from the baseline, and it recognizes emotions from the MELD dataset well. But I don't know how to recognize emotions from my own video. How do I convert a video into the data format this model requires?

I'm a beginner in multimodal emotion recognition, so I'd really appreciate any tips.

I have the same problem. Could you tell me how you solved it?