We present a study of a neural-network-based method for speech emotion recognition using audio-only features. In the studied scheme, acoustic features are extracted from the audio utterances and fed to a neural network consisting of CNN layers, a BLSTM layer combined with an attention mechanism, and a fully connected layer. To illustrate and analyze the classification capabilities of the network, we used the t-SNE method. We evaluated our model on the RAVDESS and IEMOCAP databases.
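The architecture described above can be sketched in PyTorch as follows. This is a minimal illustration only: the layer counts, channel widths, hidden sizes, and input feature dimensions are placeholder assumptions, not the values used in the actual model.

```python
import torch
import torch.nn as nn

class EmotionNet(nn.Module):
    """Sketch of a CNN -> BLSTM + attention -> fully-connected classifier.
    All sizes are illustrative placeholders, not the repository's values."""

    def __init__(self, n_mels=40, n_classes=8):
        super().__init__()
        # CNN front-end over (batch, 1, n_mels, time) spectrogram-like input
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d((2, 2)),
        )
        # Bidirectional LSTM over the time axis of the CNN output
        self.blstm = nn.LSTM(16 * (n_mels // 2), 64,
                             batch_first=True, bidirectional=True)
        self.attn = nn.Linear(128, 1)      # scores each time step
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):
        h = self.cnn(x)                                  # (B, C, F, T')
        b, c, f, t = h.shape
        h = h.permute(0, 3, 1, 2).reshape(b, t, c * f)   # (B, T', C*F)
        h, _ = self.blstm(h)                             # (B, T', 128)
        w = torch.softmax(self.attn(h), dim=1)           # attention over time
        pooled = (w * h).sum(dim=1)                      # weighted average, (B, 128)
        return self.fc(pooled)

logits = EmotionNet()(torch.randn(2, 1, 40, 100))
print(tuple(logits.shape))  # (2, 8)
```

The attention layer here simply learns a scalar score per BLSTM time step and pools the sequence as a softmax-weighted average, which is one common way such an attention mechanism is realized.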
Hello, I would like to ask where the .mat file for the impulse response data can be obtained?
mat = scipy.io.loadmat(
'/home/dsi/shermad1/Emotion_Recognition/Data/Reverberation_data/Impulse_response_Acoustic_Lab_Bar-IlanUniversity(Reverberation_0.610s)_3-3-3-8-3-3-3_2m_090.mat')
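Independently of where that particular file comes from, the way such an impulse-response `.mat` file is used can be sketched with a self-contained round trip: save a synthetic room impulse response, load it back with `scipy.io.loadmat`, and convolve it with a dry signal to simulate reverberation. The key name `impulse_response` is an assumption for illustration; inspect `mat.keys()` on the real file to find its actual field names.

```python
import os
import tempfile

import numpy as np
import scipy.io
import scipy.signal

# Toy impulse response: direct path plus one attenuated echo (placeholder data).
rir = np.zeros(100)
rir[0] = 1.0
rir[50] = 0.5

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "rir.mat")
    # Save and reload, mimicking how the repository loads its .mat file.
    scipy.io.savemat(path, {"impulse_response": rir})
    mat = scipy.io.loadmat(path)
    # loadmat returns 2-D arrays (1, N) for 1-D data, so flatten it.
    loaded = mat["impulse_response"].ravel()

# Apply the impulse response to a dry signal and trim to the original length.
speech = np.random.randn(16000)
wet = scipy.signal.fftconvolve(speech, loaded)[: len(speech)]
print(wet.shape)  # (16000,)
```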