Implementation of the paper "Improved End-to-End Speech Emotion Recognition Using Self Attention Mechanism and Multitask Learning" From INTERSPEECH 2019
Hi,
Thank you for sharing your code!
I read the README file, but it only describes the procedure for training the audio-only model.
Could you kindly share the procedure for training the multi-task model with gender classification?
Thank you.
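For context, here is a minimal sketch of what I assume the multi-task setup looks like: a shared encoder with separate emotion and gender heads, trained with a weighted sum of two cross-entropy losses. The layer sizes, loss weight `alpha`, class counts, and feature dimensions below are my own guesses, not taken from your code, so please correct me if the actual procedure differs.

```python
# Hypothetical sketch (my assumption, not the repo's actual code) of multi-task
# training: a shared encoder feeding an emotion head and a gender head,
# optimized with a weighted sum of the two cross-entropy losses.
import torch
import torch.nn as nn

class MultiTaskSER(nn.Module):
    def __init__(self, feat_dim=40, hidden_dim=128, num_emotions=4, num_genders=2):
        super().__init__()
        # Shared recurrent encoder over frame-level features (dimensions are guesses).
        self.encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.emotion_head = nn.Linear(2 * hidden_dim, num_emotions)
        self.gender_head = nn.Linear(2 * hidden_dim, num_genders)

    def forward(self, x):
        out, _ = self.encoder(x)      # (batch, time, 2 * hidden_dim)
        pooled = out.mean(dim=1)      # simple mean pooling over time
        return self.emotion_head(pooled), self.gender_head(pooled)

model = MultiTaskSER()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
alpha = 0.3  # gender-loss weight; the value actually used in the paper/repo is unknown to me

# One dummy training step with random data, just to illustrate the combined loss.
features = torch.randn(8, 100, 40)          # (batch, frames, feature dim)
emotion_labels = torch.randint(0, 4, (8,))
gender_labels = torch.randint(0, 2, (8,))

emotion_logits, gender_logits = model(features)
loss = criterion(emotion_logits, emotion_labels) + alpha * criterion(gender_logits, gender_labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Is this roughly how the gender auxiliary task is wired in, and which script or flag in the repo enables it?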