Tandon-A / emotic

PyTorch implementation of Emotic CNN methodology to recognize emotions in images using context information.

Why did I get a low mAP compared with the paper? #8

Closed · YHDASHEN closed this 3 years ago

YHDASHEN commented 3 years ago

Hi, I ran the training step and obtained the models, then ran the test step, and the mAP was only 0.20610. At first I thought it might be due to poor training on my part, so I downloaded the provided trained models and thresholds, but the test mAP was still low. I then looked at the dataset preprocessing step, where I received errors like `libpng warning: iCCP: known incorrect sRGB profile`, `libpng warning: iCCP: extra compressed data`, and `Corrupt JPEG data: 44 extraneous bytes before marker 0xd9`, and I don't know whether these have a bad influence. I also looked at the test-step result from this notebook, and it is similar to my result. So I wonder why the mAP is so different from the one reported in the paper. I'd appreciate it if you could give me some hints.
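(For reference, my understanding is that the mAP here is the mean, over the 26 discrete EMOTIC emotion categories, of the per-category average precision. Here is a minimal sketch of how I compute it with scikit-learn; the variable names are just illustrative, not from this repo:)

```python
import numpy as np
from sklearn.metrics import average_precision_score

def mean_average_precision(labels, scores):
    """labels: (N, 26) binary ground-truth category matrix.
    scores: (N, 26) predicted per-category confidence scores.
    Returns the mean of the 26 per-category average precisions."""
    aps = [average_precision_score(labels[:, c], scores[:, c])
           for c in range(labels.shape[1])]
    return float(np.mean(aps))
```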

Best regards, Hui

Tandon-A commented 3 years ago

@YHDASHEN

Hello,

I agree with you that it is very hard to reproduce the authors' reported performance. I refer you to the earlier issue, #1, where I mention a few things that improved performance a bit. Using the strategies listed there, I was able to reach 26.02 mAP.

One more thing that worked for me was setting the person's pixels to zero in the context image (see the sketch below). This forces the network to look at the other areas of the context image.
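As a rough illustration of that masking step, assuming the person's bounding box is known in image coordinates (the helper below is a sketch, not the exact code in this repo):

```python
import numpy as np

def mask_person(context_image, bbox):
    """Zero out the person's pixels in the context image so the
    network has to rely on the surrounding scene.

    context_image: (H, W, C) uint8 array.
    bbox: (x1, y1, x2, y2) person bounding box in pixels.
    """
    x1, y1, x2, y2 = bbox
    masked = context_image.copy()
    masked[y1:y2, x1:x2, :] = 0  # blank out the person region
    return masked
```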

You can try these approaches; they should noticeably boost the results.

Some corrupted images in the dataset were failing the preprocessing step; when such errors occur, I believe the code skips those images automatically, so the warnings you saw shouldn't be a problem.
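A minimal sketch of that skip-on-failure pattern, assuming OpenCV is used for loading (an illustrative helper, not the repo's exact code):

```python
import cv2

def load_valid_images(image_paths):
    """Yield (path, image) pairs, skipping files that fail to decode."""
    for path in image_paths:
        img = cv2.imread(path)  # returns None for unreadable/corrupt files
        if img is None:
            print(f"Skipping corrupted image: {path}")
            continue
        yield path, img
```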

Best Regards, Abhishek

YHDASHEN commented 3 years ago

Hi Abhishek,

thank you so much for your reply. I'll look into it.

Best regards, Hui