KrishnaDN / speech-emotion-recognition-using-self-attention

Implementation of the paper "Improved End-to-End Speech Emotion Recognition Using Self Attention Mechanism and Multitask Learning" from INTERSPEECH 2019.

about the results #1

Open LoganLiu66 opened 4 years ago

LoganLiu66 commented 4 years ago

Hello, I want to ask if you got the same results as reported in this paper. I tried my best but can't get the same results. The attached file has some details about my code. I want to know if there is something wrong with my code. Thanks. code.txt

KrishnaDN commented 4 years ago

> Hello, I want to ask if you got the same results as reported in this paper. I tried my best but can't get the same results. The attached file has some details about my code. I want to know if there is something wrong with my code. Thanks. code.txt

Hi, that's right, we won't get the same accuracy. We won't even get close to what they are reporting. I wrote to the authors and asked them for their code so that I could compare, but they did not agree to share it. There is no other way to validate these results unless they release their code.

youcaiSUN commented 4 years ago

> Hi, that's right, we won't get the same accuracy. We won't even get close to what they are reporting. I wrote to the authors and asked them for their code so that I could compare, but they did not agree to share it. There is no other way to validate these results unless they release their code.

Hi, Krishna! Thanks for sharing your code! What are the best WA and UA in your implementation, with and without multitask learning?

KrishnaDN commented 4 years ago

> Hi, that's right, we won't get the same accuracy. We won't even get close to what they are reporting. I wrote to the authors and asked them for their code so that I could compare, but they did not agree to share it. There is no other way to validate these results unless they release their code.

> Hi, Krishna! Thanks for sharing your code! What are the best WA and UA in your implementation, with and without multitask learning?

As of now we get around 56% WA, and I don't remember how much we get for UA. Without multi-task learning, we get about the same accuracy as the paper reports. According to the paper, we should get a huge boost when we add multi-task learning, and clearly that is not happening. I spoke to the original authors, but no luck.
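For anyone unfamiliar with the two metrics: WA (weighted accuracy) is plain accuracy over all test utterances, while UA (unweighted accuracy) is the mean of per-class recalls. A minimal sketch of how they are typically computed, assuming a scikit-learn setup (the labels below are purely illustrative):

```python
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

def wa_ua(y_true, y_pred):
    """Compute WA (overall accuracy) and UA (mean per-class recall)."""
    # WA (weighted accuracy): plain accuracy over all utterances,
    # so the majority emotion classes dominate the score.
    wa = accuracy_score(y_true, y_pred)
    # UA (unweighted accuracy): macro-averaged recall,
    # so every emotion class counts equally regardless of its size.
    ua = recall_score(y_true, y_pred, average="macro")
    return wa, ua

# Illustrative 4-class labels (e.g. 0=angry, 1=happy, 2=neutral, 3=sad)
y_true = np.array([0, 0, 1, 2, 2, 2, 3])
y_pred = np.array([0, 1, 1, 2, 2, 0, 3])
print(wa_ua(y_true, y_pred))  # WA ≈ 0.714, UA ≈ 0.792
```

With imbalanced emotion classes, WA and UA can diverge quite a bit, which is why papers in this area usually report both.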

jingyu-95 commented 4 years ago

Hi, I cannot get the same results as the paper either, and there is overfitting: the train WA is over 80%, but the test WA is only about 50%. Do you have the same problem?

KrishnaDN commented 3 years ago

> Hi, I cannot get the same results as the paper either, and there is overfitting: the train WA is over 80%, but the test WA is only about 50%. Do you have the same problem?

Hi, sorry for the delayed response. I have fixed some of the issues and added a learning-rate scheduler that reduces the learning rate based on the learning curve. There should not be any overfitting, because the number of layers and hidden units is exactly the same as in the paper. According to my tests, if you run 5-fold cross-validation, you should get ~54-55% average accuracy. My GPU servers are completely occupied for a week or so; I will upload the pretrained models and loss curves as soon as possible.
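For readers hitting the same overfitting: a "scheduled learning rate based on the learning curve" typically means something like PyTorch's `ReduceLROnPlateau`. A minimal sketch, assuming a PyTorch training loop (the model, dimensions, and validation step below are placeholders, not the repo's actual code):

```python
import torch

# Placeholder model and optimizer for illustration only;
# the repo's actual architecture and hyperparameters differ.
model = torch.nn.Linear(40, 4)  # e.g. 40-dim features -> 4 emotion classes
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Halve the LR whenever validation accuracy stops improving
# for 3 consecutive epochs, i.e. react to the learning curve.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.5, patience=3
)

for epoch in range(50):
    # ... run one training epoch here ...
    val_acc = 0.0  # replace with a real validation pass
    scheduler.step(val_acc)  # LR drops when val_acc plateaus
```

The ~54-55% figure above would then be the test WA averaged over the 5 cross-validation folds.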