Closed harryxu-yscz closed 6 years ago
Hi, thanks for your interest. I'm not exactly sure how the equation works, as the labels were generated by my collaborator Dr. Feng Zhou. My initial guess is that, after obtaining all the dimensional scores, we divided them by the maximum score so that they all lie within [0, 1]. Please contact Dr. Feng Zhou for details.
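For what it's worth, the guessed normalization above would look something like this. This is only a sketch of the "divide by the maximum score" idea, assuming non-negative raw scores stored as an `(N, 3)` array; it is not the actual pipeline used to produce `imdb_DimEmotion.mat`:

```python
import numpy as np

def normalize_scores(scores: np.ndarray) -> np.ndarray:
    """Scale raw dimensional (V/A/D) scores into [0, 1] by dividing
    every value by the single maximum score, as guessed above.
    Assumes all raw scores are non-negative."""
    return scores / scores.max()

# Illustrative raw scores only -- not real label data.
raw = np.array([[3.2, 5.1, 4.0],
                [1.0, 9.0, 2.5]])
norm = normalize_scores(raw)
```

After this, every value is in [0, 1] and the largest raw score maps exactly to 1.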
Thanks for the guidance! I emailed Dr. Feng Zhou about this. Meanwhile, could you provide the normalized valence/arousal/dominance values for the FER+ labels? The labels are: neutral, happiness, surprise, sadness, anger, disgust, fear, and contempt.
The labels used in our project can be found here https://drive.google.com/open?id=1s79cTqa9ftVfynUk0uZdQZUElozsaQ6l
The FER+ labels can be found here https://github.com/Microsoft/FERPlus
Hope this helps!
Hi, the labels in `imdb_DimEmotion.mat` are normalized dimensional scores for each image. I want to map the dimensional labels to the FER+ labels and calculate accuracy. What are the dimensional scores for the FER+ labels: neutral, happiness, surprise, sadness, anger, disgust, fear, and contempt?
I see. It makes sense to map the dimensional scores back to discrete labels so that accuracy can be compared. However, this is hard to do: the dimensional scores are determined by all the labels in some way, so reversing the mapping is an under-determined problem. I also considered this as a way to compare with other classification-based methods, but it's non-trivial, so we didn't do it.
"Fine-Grained Facial Expression Analysis Using Dimensional Emotion Model" describes a way to calculate accuracy in Section V. I want to replicate that method. All I need are the valence/arousal/dominance mean values and standard deviations for the labels.
I tried to find these values in the ANEW and English Lemma papers, but there are two problems I can't solve:
Thus, I think it makes most sense for me to just ask for the scores for these labels
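In case it helps anyone else attempting this, the Section-V-style mapping could be sketched as assigning a predicted (valence, arousal, dominance) triple to the label whose mean VAD vector is nearest, with each dimension scaled by its standard deviation. The means and stds below are placeholders invented for illustration, not the real ANEW/lemma norms; the actual values would have to come from Dr. Feng Zhou:

```python
import numpy as np

# PLACEHOLDER statistics, illustrative only -- substitute the real
# per-label VAD means and stds once obtained.
LABEL_STATS = {
    # label: (mean [V, A, D], std [V, A, D]), all assumed in [0, 1]
    "neutral":   (np.array([0.50, 0.30, 0.50]), np.array([0.10, 0.10, 0.10])),
    "happiness": (np.array([0.90, 0.70, 0.65]), np.array([0.08, 0.12, 0.10])),
    "sadness":   (np.array([0.15, 0.35, 0.30]), np.array([0.10, 0.12, 0.11])),
    "anger":     (np.array([0.20, 0.80, 0.55]), np.array([0.09, 0.10, 0.12])),
}

def nearest_label(vad: np.ndarray) -> str:
    """Return the discrete label whose mean VAD vector is closest to
    `vad` under a std-normalized Euclidean distance."""
    def dist(stats):
        mean, std = stats
        return np.linalg.norm((vad - mean) / std)
    return min(LABEL_STATS, key=lambda k: dist(LABEL_STATS[k]))

# A high-valence, high-arousal triple should land on "happiness"
# under the placeholder statistics above.
print(nearest_label(np.array([0.88, 0.72, 0.60])))
```

Accuracy would then be the fraction of images whose mapped label matches the FER+ ground truth, which is why the per-label means and stds are the only missing pieces.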
Hi, as I only have the datasets at the two links above, please contact Dr. Feng Zhou again for the data. He managed all the data, converted it, and sent me the final version used for training. Apologies for the inconvenience.
arousal/valence/dominance is the correct order.
I will ask Dr. Feng Zhou for these data then. Thanks!
@harryxu-yscz Sorry to bother you, but can you show me how the dimensional labels for each image are derived?
Hi Duy Anh,
Thanks for asking. Dr. Feng Zhou prepared the labeled data, and he can answer your question.
Thanks for your reply, I will ask Dr. Feng Zhou.
Thanks for sharing your work!
Could you elaborate on how the dimensional labels for each image are derived? The labels provided in your `imdb_DimEmotion.mat` seem to be normalized between 0 and 1, but the equation in your paper does not produce normalized values.