Dalia-Sher / Speech-Emotion-Recognition-using-BLSTM-with-Attention

We present a study of a neural-network-based method for speech emotion recognition using audio-only features. In the studied scheme, acoustic features are extracted from the audio utterances and fed to a network consisting of CNN layers, a BLSTM combined with an attention-mechanism layer, and a fully-connected layer. To illustrate and analyze the classification capabilities of the network, we used the t-SNE method. We evaluated our model on the RAVDESS and IEMOCAP databases.

impulse response #1

Open Z-2148 opened 1 year ago

Z-2148 commented 1 year ago

Hello, I would like to ask where the .mat file for the impulse response data can be obtained?

```python
mat = scipy.io.loadmat('/home/dsi/shermad1/Emotion_Recognition/Data/Reverberation_data/Impulse_response_Acoustic_Lab_Bar-IlanUniversity(Reverberation_0.610s)_3-3-3-8-3-3-3_2m_090.mat')
```

Dalia-Sher commented 1 year ago

Hey, you can find the file here: The Multichannel Impulse Response Database
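Once the file is downloaded, its contents can be inspected with `scipy.io`. The sketch below is a minimal, self-contained example of the same `loadmat` pattern used in the issue: it writes a toy impulse-response array to a `.mat` file and reads it back. The variable name `impulse_response` and the array shape are assumptions for illustration; check `mat.keys()` on the real database file to find its actual variable names.

```python
import numpy as np
import scipy.io

# Toy stand-in for a multichannel impulse response: 8 channels x 4800 samples.
# The name 'impulse_response' is hypothetical, not the database's actual key.
ir = np.random.randn(8, 4800)
scipy.io.savemat('toy_impulse_response.mat', {'impulse_response': ir})

# Load it back the same way the issue's code does.
mat = scipy.io.loadmat('toy_impulse_response.mat')

# List the data variables (keys starting with '__' are MATLAB metadata).
data_keys = [k for k in mat.keys() if not k.startswith('__')]
print(data_keys)
print(mat['impulse_response'].shape)
```

On the real file, printing the non-metadata keys is usually the quickest way to discover which variable holds the impulse responses before indexing into it.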