facebookresearch / mmf

A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
https://mmf.sh/

Better benchmark results in the updated version of the paper #967

Open jkubajek opened 3 years ago

jkubajek commented 3 years ago

❓ Questions and Help

Hi, I have just noticed a huge gap between the performance of the benchmark models in the initial version of the paper (The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes) and its latest version (Apr 2021). What caused such a large increase in model performance?

[screenshots: benchmark result tables from the v1 and v2 versions of the paper]

apsdehal commented 3 years ago

Hi, the first table, from v1, reports results on the test_seen set, which was used in Phase 1 of the Hateful Memes Challenge, while the second was updated to use test_unseen, which was used in Phase 2 of the challenge. (I believe this has been asked before somewhere in MMF's GitHub issues.)
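
If it helps, generating predictions on the unseen split in MMF looks roughly like this (a sketch based on the Hateful Memes project docs; the config path, checkpoint zoo key, and annotation path are illustrative and may differ across MMF versions):

```sh
# Evaluate on test_unseen (Phase 2) instead of the default test_seen
# by overriding the test annotation file. The config, model, and
# resume_zoo values below are example choices, not the only option.
mmf_predict config=projects/hateful_memes/configs/mmbt/defaults.yaml \
    model=mmbt \
    dataset=hateful_memes \
    run_type=test \
    checkpoint.resume_zoo=mmbt.hateful_memes.images \
    dataset_config.hateful_memes.annotations.test[0]=hateful_memes/defaults/annotations/test_unseen.jsonl
```

Without the final override, the same command evaluates on test_seen, which is the difference between the two sets of numbers in the paper.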

jkubajek commented 3 years ago

Hi, thanks @apsdehal. I looked through previous issues but did not find an exact answer to my question.

apsdehal commented 3 years ago

No worries. I hope the response resolves your questions.

jkubajek commented 3 years ago

So to summarize, you didn't change the way the models are trained or their architecture; you only swapped the test datasets? I ask because next week I will give a presentation at the Data Science Summit conference about my solution to the Hateful Memes Challenge, and I want to be sure.