Hi,
first of all, thanks for your great work and for providing this dataset!
Unfortunately, I just realized that the labeling indices you provide in the annotation files, i.e. `{'neutral': 0, 'surprise': 1, 'fear': 2, 'sadness': 3, 'joy': 4, 'disgust': 5, 'anger': 6}`, do not match how you specify them in the paper and, more importantly, the order of the weights in the README.md: `[4.0, 15.0, 15.0, 3.0, 1.0, 6.0, 3.0]`. Given the occurrence counts, I'm assuming that the weight `1.0` belongs to neutral, which is index `0` in the annotation files but index `4` in the weights.

Could you correct the weights so that users won't accidentally use the wrong assignments if they don't compute the weights themselves (like me)?
The correct weights are `[1.0, 3.0, 15.0, 6.0, 3.0, 15.0, 4.0]`.
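For anyone who wants to double-check, here is a minimal Python sketch of how I reordered them. The label-to-index mapping is the one from the annotation files; `weights_by_label` is my assumed assignment of each weight to its emotion, based on the reasoning above:

```python
# Label -> index mapping as given in the annotation files.
label_to_index = {'neutral': 0, 'surprise': 1, 'fear': 2, 'sadness': 3,
                  'joy': 4, 'disgust': 5, 'anger': 6}

# Assumed weight per emotion (e.g. the weight 1.0 belongs to neutral).
weights_by_label = {'neutral': 1.0, 'surprise': 3.0, 'fear': 15.0,
                    'sadness': 6.0, 'joy': 3.0, 'disgust': 15.0, 'anger': 4.0}

# Build the weight list so that position i holds the weight of the label
# whose annotation-file index is i.
weights = [0.0] * len(label_to_index)
for label, idx in label_to_index.items():
    weights[idx] = weights_by_label[label]

print(weights)  # [1.0, 3.0, 15.0, 6.0, 3.0, 15.0, 4.0]
```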
Best,
Florian