Sreyan88 / MMER

Code for the InterSpeech 2023 paper: MMER: Multimodal Multi-task learning for Speech Emotion Recognition
https://arxiv.org/abs/2203.16794

How to train and validate the model? #3

Closed. Coding511 closed this issue 5 months ago.

Coding511 commented 1 year ago

Dear author, please guide me on how to use this model to reproduce the results. I have cloned the repo on my device. What steps should I follow to train it in Spyder?

Sreyan88 commented 1 year ago

Hello! Thank you for your interest. You can follow the steps in our repo. We will also be pushing some updates to the repo by the end of the week, so you might want to wait for those. @ramaneswaran would also be happy to help if you get stuck anywhere in the instructions.

Coding511 commented 1 year ago

@Sreyan88 @ramaneswaran I want to run the model to reproduce your results on IEMOCAP. The instructions are for a shell script, but I am using the Python editor Spyder. How do I run it there?
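For anyone hitting the same question: Spyder's IPython console accepts shell commands prefixed with `!` (e.g. `!bash run.sh`), so the repo's shell script can be launched without leaving the editor. Alternatively, you can invoke it from Python via `subprocess`. A minimal sketch, assuming the training script is named `run.sh` (check the repo for the actual filename and arguments):

```python
import subprocess

def build_cmd(script="run.sh", extra_args=None):
    """Build the shell command for the training script.

    `run.sh` and any flags here are placeholders; substitute the
    repo's real script name and arguments.
    """
    return ["bash", script] + list(extra_args or [])

def run_training(script="run.sh", extra_args=None):
    """Run the training script and return its exit code (0 = success)."""
    return subprocess.run(build_cmd(script, extra_args)).returncode
```

Calling `run_training("run.sh", ["--epochs", "50"])` from the Spyder console would then stream the script's output into the console, the same as running it from a terminal.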

Sreyan88 commented 1 year ago

@Utkarsh4430 can you please update the repo with our latest code and instructions? Thank you!

Sreyan88 commented 1 year ago

Hi @Coding511 ,

We will release our latest code, with better results, in 1-2 days. Thank you for your patience!

Coding511 commented 1 year ago

@Sreyan88 when will that code be out? Please write it in PyTorch so it can be run on a local machine.

Sreyan88 commented 1 year ago

Extremely sorry for the delayed response. Our paper has been accepted to InterSpeech 2023, and we have pushed new code with improved performance. Please let us know if you run into any bugs.