Closed: saeu5407 closed this issue 1 year ago
Hello! Thanks for this question. In the current implementation (see the section "Multi-task: FER+Valence-Arousal" in train_emotions-pytorch.ipynb), the valence and arousal outputs are unbounded. I tried tanh and similar activations, but training was unstable. Hence, the predicted valence and arousal may fall outside the range [-1; 1], but you can simply use something like np.clip to fit them into [-1; 1]. I should also clarify that valence and arousal were mainly used as auxiliary targets in multi-task learning to improve the quality of facial expression recognition, so the quality of the valence/arousal predictions themselves may be imperfect.
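For reference, a minimal sketch of the clipping suggested above; the sample predictions are made up purely for illustration:

```python
import numpy as np

# Hypothetical raw valence/arousal predictions from the multi-task head;
# a couple of values fall slightly outside the nominal [-1, 1] range.
raw_va = np.array([[ 0.35, -0.12],
                   [ 1.07, -0.40],
                   [-0.25, -1.03]])

# Clip both dimensions into [-1, 1] before using them downstream.
va = np.clip(raw_va, -1.0, 1.0)
print(va)  # out-of-range entries become exactly 1.0 / -1.0
```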
Thank you. So can I understand it as: the training itself used values in [-1, 1], but the predicted values can fall outside that range?
Yes, you're correct. However, as most of the valence and arousal values in the training set have an absolute value significantly lower than 1, I did not notice any cases where my network's predictions were out of the [-1; 1] bounds.
Thank you again for the code and explanation you provided.
Thank you for your great model.
Is it correct that the range of valence and arousal for enet_b0_8_va_mtl.pt is [-1, 1]?
Based on the AffectNet dataset it looks like [-1, 1], but I would like to know the exact range.
Thank you.