Hello! Thanks for your interest in my work. You need to convert the input image into RGB format. Please look at my demo source code. However, you may still get slightly different results than reported in my papers because I use Pillow to load and resize the images, and as far as I remember, other loading/resizing pipelines introduce small differences in accuracy. To reproduce my results, you could take a look at the CodeOcean test script run.py.
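For reference, a minimal sketch of that pre-processing, assuming Pillow for loading/resizing and a 224x224 input size (the exact size, normalization, and face cropping depend on the chosen model; the demo code and run.py are the authoritative reference):

```python
from PIL import Image
import numpy as np

IMG_SIZE = 224  # assumption: adjust to the input size expected by the chosen model

def load_face(path):
    # Load with Pillow and force RGB (grayscale or RGBA inputs would otherwise break the model)
    img = Image.open(path).convert('RGB')
    # Resize with Pillow, as in the demo; other resize implementations (e.g. OpenCV)
    # can yield slightly different pixel values and therefore slightly different accuracy
    img = img.resize((IMG_SIZE, IMG_SIZE))
    return np.asarray(img)
```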
Thank you for your quick response!
Is the affectnet_val.tar file the validation split you used for the evaluation reported in the paper?
As far as I can tell, it differs from the validation set in the version of AffectNet I downloaded a week ago.
Yes, I downloaded AffectNet several years ago and used the train/validation data from that release. I believe my tar file may contain the files after some post-processing, e.g., face extraction. The dataset format has since changed a bit, and other contributors modified my code to work with the new version; see an example here. As far as I know, the results for that version were more or less the same as mine, so you definitely need to use the correct pre-processing and look at the code examples in my repository. I believe there were several closed issues with questions similar to yours, e.g., issue 10; take a look at them as well.
However, I'm not familiar with any recent changes to AffectNet. Questions about the dataset itself should be directed to its authors.
I am using the HSEmotion package to classify emotions on the AffectNet validation set, but my accuracies are worse than those reported in the README.
What do you think might be causing this discrepancy?
Here's an excerpt of how I'm using the package:
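A minimal sketch of usage along these lines, assuming the hsemotion pip package, the enet_b0_8_best_vgaf model name, and an already-cropped RGB face image as input (the actual model name, device, and cropping may differ):

```python
import numpy as np
from PIL import Image
from hsemotion.facial_emotions import HSEmotionRecognizer

# Assumed model name and device; other checkpoints (e.g. enet_b2_8) are also available
fer = HSEmotionRecognizer(model_name='enet_b0_8_best_vgaf', device='cpu')

# Face crop loaded in RGB, since the recognizer expects RGB input
face_img = np.asarray(Image.open('face_crop.jpg').convert('RGB'))

# predict_emotions returns the predicted class label and the per-class scores
emotion, scores = fer.predict_emotions(face_img, logits=True)
print(emotion, scores)
```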