AdamarisAinsley opened this issue 5 years ago
The dataset is not loaded into the file when I try another speech dataset (SAVEE/CASIA/IEMOCAP). Can you help me figure out how to solve this issue?
Were there any error messages? I just solved these data problems and ran the program successfully, so maybe I can help you with them.
It does not give any error, but it reaches 100% training accuracy, while the reported accuracy on IEMOCAP is 82%. [image: image.png]
Have you modified dataset.py? I think the validation in find_best_model.py is based on the file names of the data. Would you post the name of a data file you used here as a sample?
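For reference, here is a minimal sketch of what a file-name-based split could look like; the sample name, the `[:6]` slice, and the held-out pair are assumptions for illustration, not necessarily what find_best_model.py actually does:

```python
# Illustrative only: deriving a speaker-dependent split from an IEMOCAP-style
# wav name. The sample name, the [:6] slice and the held-out pair are assumptions.
wavname = "Ses02F_impro01_F000"   # hypothetical file name (without .wav)
prefix = wavname[:6]              # -> "Ses02F"

held_out = ("Ses02M", "Ses02F")   # hypothetical held-out speaker pair
print(prefix, "->", "test" if prefix in held_out else "train")
```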
I made changes in the dataset.py file as follows:

```python
import os
import subprocess as sp
import itertools
import librosa
import glob
import wave
import numpy as np
import python_speech_features as ps
import cPickle
import csv


class Dataset:

    def __init__(self, path, dataset, decode=False):
        self.dataset = dataset
        if dataset == "IEMOCAP":
            self.classes = {0: 'ang', 1: 'sad', 2: 'hap', 3: 'neu', 4: 'fea', 5: 'fru', 6: 'exc'}
            self.get_IEMOCAP_dataset(path)

    def get_IEMOCAP_dataset(self, path):
        males = ['Ses01M', 'Ses02M', 'Ses03M', 'Ses04M', 'Ses05M']
        females = ['Ses01F', 'Ses02F', 'Ses03F', 'Ses04F', 'Ses05F']
        try:
            classes = {v: k for k, v in self.classes.iteritems()}
        except AttributeError:
            classes = {v: k for k, v in self.classes.items()}
        self.targets = []
        self.data = []
        self.train_sets = []
        self.test_sets = []
        get_data = True
        for speak_test in itertools.product(males, females):
            i = 0
            train = []
            test = []
            for audio in os.listdir(path):
                if audio[0] == 'S':
                    sub_dir = os.path.join(path, audio, 'sentences/wav')
                    emoevl = os.path.join(path, audio, 'dialog/EmoEvaluation')
                    for sess in os.listdir(sub_dir):
                        if sess[7] == 'i':
                            emotdir = emoevl + '/' + sess + '.txt'
                            emot_map = {}
                            with open(emotdir, 'r') as emot_to_read:
                                while True:
                                    line = emot_to_read.readline()
                                    if not line:
                                        break
                                    if line[0] == '[':
                                        t = line.split()
                                        emot_map[t[3]] = t[4]
                            file_dir = os.path.join(sub_dir, sess, '*.wav')
                            files = glob.glob(file_dir)
                            for filename in files:
                                wavname = filename.split("/")[-1][:-4]
                                f = open("Audio3.csv", "a")
                                f.write(wavname)
                                f.write("\n")
                                emotion = emot_map[wavname]
                                f = open("features3.csv", "a")
                                f.write(filename)
                                f.write("\n")
                                with open('features3.csv') as csvfile:
                                    readCSV = csv.reader(csvfile, delimiter=' ')
                                    for row in readCSV:
                                        audio_path = row[0]
                                        y, sr = librosa.load(audio_path, sr=16000)
                                        if get_data:
                                            self.data.append((y, sr))
                                            self.targets.append(classes[emotion])
                                with open('Audio3.csv') as csvfile:
                                    readCSV = csv.reader(csvfile, delimiter=' ')
                                    for row in readCSV:
                                        audio1 = row[0]
                                        if audio1[:6] in speak_test:
                                            test.append(i)
                                        else:
                                            train.append(i)
                                        i = i + 1
            self.train_sets.append(train)
            self.test_sets.append(test)
            get_data = False
```
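For completeness, a quick sanity check of whether anything actually gets loaded might look like the sketch below; the import path and the dataset root ("/path/to/IEMOCAP") are placeholders:

```python
# Sketch of a sanity check, assuming the Dataset class above lives in dataset.py
# and "/path/to/IEMOCAP" is replaced with the real dataset root.
from dataset import Dataset

ds = Dataset("/path/to/IEMOCAP", "IEMOCAP")
print("utterances loaded:", len(ds.data))
print("targets loaded:   ", len(ds.targets))
print("number of splits: ", len(ds.train_sets))
print("first split sizes:", len(ds.train_sets[0]), "train,", len(ds.test_sets[0]), "test")
```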
"if audio1[:6] in speak_test: test.append(i) ....... train.append(i)" I think you can print out test[] and train[] here to check the data at first.
The name of the wav file is Ses02F_impro01_F000.wav, and after printing the train and test data here in the if/else statement, it just prints:
[image: image.png]
The same happens in the case of Berlin:
[image: image.png]
Let me know whether things are working the same way or not.
Your image file doesn't seem to show up here :(
IEMOCAP: [image]
Berlin: [image]
You can add me on Discord if you like.
What is your Discord tag?