katha-ai / EmoTx-CVPR2023

[CVPR 2023] Official code repository for "How you feelin'? Learning Emotions and Mental States in Movie Scenes". https://arxiv.org/abs/2304.05634
https://katha-ai.github.io/projects/emotx

Preprocessing the data #5

Closed mok0102 closed 1 year ago

mok0102 commented 1 year ago

Hello, thank you for providing this wonderful work.

I am quite curious about the encoding part, i.e., how to extract the scene/face/srt features. It seems the code for extracting those features has not been released, except for srt. I want to run inference on other movie data.

Could you please provide some code so that I can try inference on other data?

dhruvhacks commented 1 year ago

Hello @mok0102, For now, you may refer to the feat_extractors branch in this repository. It contains the script that was used to extract scene-action features from the MViT_v1 model. For the other feature backbones, we will release the instructions and code soon!

Kilichbek commented 1 year ago

Hello there, thanks for the exciting work. The data preprocessing steps (like feature extraction) would be extremely helpful for everyone who wants to try this model on custom datasets. I would appreciate it if you could guide us on extracting and preprocessing the data.

dhruvhacks commented 1 year ago

Hello @Kilichbek, We will release the scripts that were used to extract character faces and scene features from the MovieGraphs dataset. The RoBERTa fine-tuning and feature extraction scripts have already been released. The action feature (~scene feature) extraction from the MViT_v1 model is already shared in the feat_extractors branch.
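Since the subtitle pipeline starts from raw .srt files, here is a stdlib-only sketch of the kind of parsing that typically precedes RoBERTa feature extraction. The function and dataclass names are my own for illustration, not identifiers from this repository:

```python
import re
from dataclasses import dataclass

@dataclass
class SubtitleLine:
    start: float  # start time in seconds
    end: float    # end time in seconds
    text: str

_TS = re.compile(r"(\d{2}):(\d{2}):(\d{2})[,.](\d{3})")

def _to_seconds(ts: str) -> float:
    """Convert an srt timestamp like '00:00:01,000' to seconds."""
    h, m, s, ms = map(int, _TS.match(ts).groups())
    return h * 3600 + m * 60 + s + ms / 1000.0

def parse_srt(srt_text: str) -> list[SubtitleLine]:
    """Parse the contents of an .srt file into timed subtitle lines."""
    lines = []
    # Subtitle blocks are separated by blank lines:
    # index line, timing line, then one or more text lines.
    for block in re.split(r"\n\s*\n", srt_text.strip()):
        rows = block.strip().splitlines()
        if len(rows) < 3:
            continue
        start, _, end = rows[1].partition(" --> ")
        text = " ".join(rows[2:])
        lines.append(SubtitleLine(_to_seconds(start), _to_seconds(end.strip()), text))
    return lines

demo = """1
00:00:01,000 --> 00:00:03,500
Hello there.

2
00:00:04,000 --> 00:00:06,000
How you feelin'?"""

subs = parse_srt(demo)
print(subs[0].start, subs[1].text)  # 1.0 How you feelin'?
```

The timestamps let you align each utterance with the movie-scene window it falls in; the text of the aligned utterances is what gets tokenized and fed to the (fine-tuned) RoBERTa encoder to produce the srt features.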

dhruvhacks commented 1 year ago

Hey @mok0102 and @Kilichbek , The feature extractor module and the instructions to re-extract the features from the MovieGraphs dataset are now released!

Refer to Feature Extraction for instructions.