-
Let's discuss the dataset here.
-
Very good work, but I managed to obtain almost the same accuracy and loss only if I run the code on Ravdess_Music without Ravdess_Speech! If I run it on both, I get overfitting. The model has ver…
-
This is an inspiring piece of work, and thank you for keeping it open source. I was just wondering whether it demonstrates overfitting. Specifically, the audio from the video is the same a…
-
Hi again!
Thanks for fixing the reading of Apple-formatted WAV files.
I'm trying to read sff files in Python as shown in the demo; however, it seems the complex values aren't saved in the sff file. I g…
-
import numpy as np
from keras.utils import np_utils
from sklearn.preprocessing import LabelEncoder
X_train = np.array(trainfeatures)
y_train = np.array(trainlabel)
X_test = np.array(testfeatures)
y_test = np.arra…
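For context (an assumption about where the truncated snippet is heading, based on its imports): `LabelEncoder` maps string emotion labels to sorted integer indices, and `np_utils.to_categorical` turns those indices into one-hot vectors. The same transformation can be sketched in plain NumPy, so it can be checked without Keras or scikit-learn installed:

```python
import numpy as np

# Plain-NumPy equivalent of LabelEncoder + np_utils.to_categorical:
# map string labels to sorted integer indices, then one-hot encode them.
labels = np.array(["calm", "happy", "calm", "angry"])  # example labels
classes = np.unique(labels)                 # sorted: ['angry', 'calm', 'happy']
indices = np.searchsorted(classes, labels)  # integer index per label
one_hot = np.eye(len(classes))[indices]     # shape (n_samples, n_classes)
```

The resulting `one_hot` array is what a softmax output layer with `categorical_crossentropy` expects as `y_train`/`y_test`.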
-
In CNN_SpeechEmotion.ipynb, when I run
cnnhistory=model.fit(x_traincnn, y_train, batch_size=20, epochs=500, validation_data=(x_testcnn, y_test))
there is an error. Could you please tell me how t…
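The traceback is cut off, so this is only a guess at a common cause: `model.fit` errors with 1-D CNN models often come from feeding 2-D feature matrices where `Conv1D` expects a trailing channel dimension. A minimal sketch of the reshape (the `x_traincnn`/`x_testcnn` names are taken from the question; the shapes are hypothetical):

```python
import numpy as np

# Hypothetical extracted-feature matrices: (samples, feature_length)
x_train = np.random.rand(100, 216)
x_test = np.random.rand(25, 216)

# Conv1D layers expect (samples, steps, channels), so add a channel axis.
x_traincnn = np.expand_dims(x_train, axis=2)
x_testcnn = np.expand_dims(x_test, axis=2)
```

If the error persists after the reshape, posting the full traceback would make the actual cause much easier to pin down.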
-
I have a doubt regarding these lines: what data do they contain, and in which format?
mylist= os.listdir('RawData/')
data, sampling_rate = librosa.load('RawData/f11 (2).wav')
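To answer in general terms: `os.listdir` just returns the filenames in `RawData/`, and `librosa.load` returns a mono float32 time series (amplitude samples, roughly in [-1, 1]) plus its sampling rate, resampled to 22050 Hz by default. A synthetic stand-in for those return values, assuming librosa isn't installed:

```python
import numpy as np

# Stand-in for librosa.load's return values: a 1-D float32 waveform
# and its sampling rate (librosa resamples to 22050 Hz by default).
sampling_rate = 22050
n_samples = 2 * sampling_rate                    # two seconds of audio
t = np.arange(n_samples) / sampling_rate
data = (0.5 * np.sin(2 * np.pi * 440 * t)).astype(np.float32)

duration = len(data) / sampling_rate             # length in seconds
```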
-
This library looks awesome! I'm trying to reproduce the first example locally.
I created a folder called `my-voice-analysis/Mysp` and placed the `myspsolution.praat` file in that folder and a wav …
-
Hi haixpham,
I have some trouble reproducing your code.
I found that my feature lengths never match the AU labels when using the RAVDESS dataset.
Is this expected?
Waiting for your reply, th…
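Without seeing the code this is only a guess at a common cause: feature extraction and AU labels often use different frame rates or padding, leaving the two sequences with different lengths. Two generic fixes, sketched in NumPy (all array names and shapes here are hypothetical): trim both to the shorter length, or interpolate the labels onto the feature timeline.

```python
import numpy as np

# Hypothetical sequences with mismatched frame counts.
features = np.random.rand(300, 40)   # 300 frames of 40-dim features
au_labels = np.random.rand(290)      # 290 frames of one AU intensity track

# Fix 1: trim both sequences to the shorter length.
n = min(len(features), len(au_labels))
features_t, labels_t = features[:n], au_labels[:n]

# Fix 2: linearly interpolate the labels onto the feature timeline.
src = np.linspace(0.0, 1.0, len(au_labels))
dst = np.linspace(0.0, 1.0, len(features))
labels_i = np.interp(dst, src, au_labels)
```

Which fix is appropriate depends on whether the mismatch is a constant offset (trimming is fine) or a frame-rate difference (interpolation is safer).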
-
There is a major issue with the data: the audio tracks in the video files are the same recordings as the audio-only files. This means there are duplicates in your data, which produces a spurious 92% accuracy.
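One way to guard against this (a sketch, not taken from the repository): RAVDESS filenames encode seven fields, modality-channel-emotion-intensity-statement-repetition-actor, where modality 01 is full-AV and 03 is audio-only, so the same take can appear under two modalities. Dropping all but one modality per utterance removes the duplicates, and splitting train/test by actor ID additionally prevents speaker leakage:

```python
# Sketch: deduplicate RAVDESS files across modalities and split by actor.
# The helper name and example filenames are hypothetical, but the names
# follow the official RAVDESS scheme, e.g. "03-01-06-01-02-01-12.wav".

def dedupe_and_split(filenames, test_actors):
    """Keep one modality per utterance, then split by actor ID."""
    seen, train, test = set(), [], []
    for name in sorted(filenames):
        parts = name.removesuffix(".wav").split("-")
        # Every field except modality identifies the underlying recording.
        utterance_key = tuple(parts[1:])
        if utterance_key in seen:
            continue  # same take already kept under another modality
        seen.add(utterance_key)
        actor = int(parts[6])
        (test if actor in test_actors else train).append(name)
    return train, test

files = [
    "01-01-06-01-02-01-12.wav",  # audio extracted from the full-AV file
    "03-01-06-01-02-01-12.wav",  # audio-only duplicate of the same take
    "03-01-03-02-01-01-05.wav",  # a distinct recording by actor 05
]
train, test = dedupe_and_split(files, test_actors={12})
```

With the duplicates removed and no actor shared between splits, test accuracy should reflect generalization rather than memorized recordings.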