NUSTM / FacialMMT

Code for paper "A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations"
GNU General Public License v3.0

Could you share the code for preprocessing the MELD dataset and saving the xxxx_utt.pkl files? (vision/audio modality) #9

Closed Qzjzl20000 closed 1 month ago

Qzjzl20000 commented 3 months ago

Hello, thank you very much for open sourcing this project. It is very detailed and excellent.

Regarding the preprocessing of the audio and video modalities of the MELD dataset: I found the xxxx_utt.pkl files in the provided BaiduNetdisk link (e.g., FacialMMT/preprocess_data/T+A+V/meld_test_audio_utt.pkl, or vision_utt.pkl). Could you also share the code that preprocesses the original dataset and saves the xxxx_utt.pkl files for the vision and audio modalities?

Thank you very much, and my respects.

wjzhengnlp commented 1 month ago

Thanks for your attention to our work. I have uploaded the related code for extracting visual and acoustic features to BaiduNetdisk. If you have any further questions, feel free to email me.
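For readers who cannot access BaiduNetdisk, here is a minimal sketch of how utterance-level feature files like xxxx_utt.pkl are commonly structured: a Python dict mapping an utterance ID to its feature vector, serialized with `pickle`. The dialogue/utterance ID format (`diaX_uttY`), the feature dimension (512), and the function names below are all assumptions for illustration, not the authors' actual preprocessing code.

```python
import pickle
import numpy as np

def save_utt_features(features, out_path):
    """Serialize a mapping of utterance ID -> feature array to a .pkl file."""
    with open(out_path, "wb") as f:
        pickle.dump(features, f)

def load_utt_features(in_path):
    """Load the utterance-level feature mapping back from disk."""
    with open(in_path, "rb") as f:
        return pickle.load(f)

if __name__ == "__main__":
    # Hypothetical example: random 512-dim visual features for two utterances.
    feats = {
        "dia0_utt0": np.random.rand(512).astype(np.float32),
        "dia0_utt1": np.random.rand(512).astype(np.float32),
    }
    save_utt_features(feats, "meld_test_vision_utt.pkl")
    restored = load_utt_features("meld_test_vision_utt.pkl")
    assert set(restored) == set(feats)
```

In practice, the feature vectors themselves would come from a pretrained visual or acoustic encoder run over each utterance clip; this sketch only shows the save/load plumbing around the `.pkl` format.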