declare-lab / MELD

MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
GNU General Public License v3.0

audio features #11

Open Mazagov opened 5 years ago

Mazagov commented 5 years ago

You mention that audio features were extracted with openSMILE, giving an initial set of 6,373 features, and that feature selection was then performed.

Which config file was used for feature extraction? Is it ComParE_2016? How exactly was the feature selection done? Is it possible to provide the indices or names of the selected features? Also, 122 of the 300 selected features in audio_emotion.pkl are all zeros, so they provide no information.
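For reference, the all-zero columns the question describes can be checked directly with NumPy. This is a minimal sketch using a synthetic array as a stand-in for the real contents of audio_emotion.pkl (the pickle's actual layout is an assumption here, not confirmed by the repository):

```python
import numpy as np

# Synthetic stand-in for the 300-dimensional audio feature matrix;
# the real audio_emotion.pkl structure is assumed, not confirmed.
rng = np.random.default_rng(0)
features = rng.normal(size=(50, 300))
features[:, :122] = 0.0  # mimic the 122 all-zero columns reported above

# Columns that are zero for every utterance carry no information.
zero_cols = np.flatnonzero(~features.any(axis=0))
print(f"{zero_cols.size} of {features.shape[1]} columns are all zeros")

# Dropping them leaves only the informative features.
informative = np.delete(features, zero_cols, axis=1)
print(informative.shape)
```

Running the same check on the real pickle would confirm (or update) the count of 122 uninformative columns.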

She-yh commented 1 year ago

> You mention that audio features were extracted with openSMILE, giving an initial set of 6,373 features, and that feature selection was then performed.
>
> Which config file was used for feature extraction? Is it ComParE_2016? How exactly was the feature selection done? Is it possible to provide the indices or names of the selected features? Also, 122 of the 300 selected features in audio_emotion.pkl are all zeros, so they provide no information.

@Mazagov Hi, I'm also working on this. Did you ever find out how the feature selection was done?
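Since the repository does not document the selection method, one common way to reduce a 6,373-dimensional openSMILE feature set to 300 features is univariate selection with an ANOVA F-test. The sketch below uses scikit-learn's `SelectKBest` on synthetic data; this is an illustration of the general technique, not the method the MELD authors confirmed using:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

# Hypothetical stand-ins: a 6373-dim feature matrix (openSMILE-sized)
# and labels for MELD's 7 emotion classes. Real data would replace these.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6373))
y = rng.integers(0, 7, size=200)

# Keep the 300 features with the highest ANOVA F-scores against the labels.
selector = SelectKBest(score_func=f_classif, k=300).fit(X, y)
X_sel = selector.transform(X)

# The surviving feature indices are exactly what the question asks the
# authors to publish.
selected_idx = selector.get_support(indices=True)
print(X_sel.shape)
print(selected_idx[:5])
```

Publishing `selected_idx` (or the corresponding ComParE feature names) would make the released pkl features reproducible.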