Korbrent closed this issue 1 month ago
Guys, I did some shit :smile:
The WAVs dataset is available at:
be happy!
The logic behind each file's rating fields (a parsing sketch follows the list):
"fileName" - name of the movie file rated
"VoiceVote" - the emotion (or emotions separated by a colon) with the majority vote for Voice ratings. (A, D, F, H, N, or S)
"VoiceLevel" - the numeric rating (or ratings separated by a colon) corresponding to the emotion(s) listed in "VoiceVote"
"FaceVote" - the emotion (or emotions separated by a colon) with the majority vote for Face ratings. (A, D, F, H, N, or S)
"FaceLevel" - the numeric rating (or ratings separated by a colon) corresponding to the emotion(s) listed in "FaceVote"
"MultiModalVote" - the emotion (or emotions separated by a colon) with the majority vote for MultiModal ratings. (A, D, F, H, N, or S)
"MultiModalLevel" - the numeric rating (or ratings separated by a colon) corresponding to the emotion(s) listed in "MultiModalVote"
The logic behind the file names:
The sentences were presented using different emotions (the three-letter code used in the third part of the filename is given in parentheses):
- Anger (ANG)
- Disgust (DIS)
- Fear (FEA)
- Happy/Joy (HAP)
- Neutral (NEU)
- Sad (SAD)
and at different emotion levels (the two-letter code used in the fourth part of the filename is given in parentheses; a decoding sketch follows this list):
- Low (LO)
- Medium (MD)
- High (HI)
- Unspecified (XX)
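Just to illustrate the scheme, a name like `1001_DFA_ANG_XX.wav` (a made-up but plausible example) could be decoded by splitting on underscores and mapping the third and fourth parts through the tables above:

```python
import os

EMOTIONS = {"ANG": "Anger", "DIS": "Disgust", "FEA": "Fear",
            "HAP": "Happy/Joy", "NEU": "Neutral", "SAD": "Sad"}
LEVELS = {"LO": "Low", "MD": "Medium", "HI": "High", "XX": "Unspecified"}

def decode_filename(path):
    # Assumes underscore-separated parts, with the emotion code third
    # and the level code fourth, e.g. "1001_DFA_ANG_XX.wav".
    stem, _ = os.path.splitext(os.path.basename(path))
    parts = stem.split("_")
    return EMOTIONS[parts[2]], LEVELS[parts[3]]

print(decode_filename("1001_DFA_ANG_XX.wav"))  # ('Anger', 'Unspecified')
```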
Reassigned to Leo.
Fix the `linker.py` file to grab the `.wav` files from the online repository (hosted on @leosaa's cs page). The function `prepare_dataset()` should be modified to fetch the `.wav` files, and the function `build_tensors()` will need to be modified as well for the labels.

Also, the file has been changed from an `.ipynb` file into a `.py` file at @Catmaniscatlord's request.
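Here's a rough sketch of what the fetching side of `prepare_dataset()` might look like; the base URL, the cache directory, and the download loop are all placeholders, since we don't know the exact layout of @leosaa's page yet:

```python
import os
import urllib.request

# Placeholder: the real URL of the repository on @leosaa's cs page is unknown.
BASE_URL = "https://example.edu/~leosaa/wavs/"

def prepare_dataset(file_names, cache_dir="data/wav"):
    """Download each .wav into cache_dir, skipping files we already have."""
    os.makedirs(cache_dir, exist_ok=True)
    paths = []
    for name in file_names:
        local = os.path.join(cache_dir, name)
        if not os.path.exists(local):
            urllib.request.urlretrieve(BASE_URL + name, local)
        paths.append(local)
    return paths
```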
Even though we don't yet know which features we want or how many, start converting hypothetical output data from Preprocessing into tensors to prepare for the model.
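To get a head start on that, here's a minimal sketch of what `build_tensors()` could do with hypothetical Preprocessing output (fixed-length feature vectors plus single-letter emotion votes). It assumes PyTorch, which we haven't actually committed to, and the emotion-to-index mapping is my own placeholder:

```python
import torch

# Placeholder mapping from the single-letter vote codes to class indices.
EMOTION_TO_INDEX = {"A": 0, "D": 1, "F": 2, "H": 3, "N": 4, "S": 5}

def build_tensors(features, votes):
    """features: list of equal-length float lists (hypothetical Preprocessing
    output); votes: list of single-letter emotion codes, one per clip."""
    x = torch.tensor(features, dtype=torch.float32)         # shape (N, num_features)
    y = torch.tensor([EMOTION_TO_INDEX[v] for v in votes])  # shape (N,)
    return x, y

x, y = build_tensors([[0.1, 0.2], [0.3, 0.4]], ["A", "H"])
print(x.shape, y)  # torch.Size([2, 2]) tensor([0, 3])
```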