IIGROUP / MANIQA

[CVPRW 2022] MANIQA: Multi-dimension Attention Network for No-Reference Image Quality Assessment
Apache License 2.0

Cannot achieve reported test results using pretrained model #20

Closed raresionut1 closed 1 year ago

raresionut1 commented 2 years ago

Hello, and congratulations on the paper and on winning the NR-IQA track of the NTIRE 2022 competition.

I've been trying to replicate the test results reported in the paper for the NTIRE 2022 test set, using your code and the model checkpoint you provided, but without success. Running the inference script and uploading the output to the CodaLab server, I obtained the following results: SROCC: 0.65, PLCC: 0.67

However, looking at the competition leaderboard, I see that you obtained SROCC: 0.70, PLCC: 0.74

Is there a discrepancy between the GitHub code/model checkpoint and the final model you used in the competition? If this difference in scores is due only to the use of ensembles, could you kindly provide more details about which models you used in the final ensemble, as well as the ensemble strategy? Thank you!

ch-andrei commented 2 years ago

@raresionut1 kudos on your work

Btw, did you try MANIQA on other datasets? TID, KADID...?

raresionut1 commented 2 years ago

No, not yet. Just on PIPAL.

ch-andrei commented 2 years ago

If possible, please report here what you find :)

TianheWu commented 2 years ago

We used a number of ensemble methods to boost our model and obtain the leaderboard score.
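The exact ensemble recipe isn't spelled out in the thread, but a common strategy in IQA challenges is to average predicted scores across several trained models and across random crops of the test image (test-time augmentation). A minimal sketch, assuming hypothetical `predict_fns` that stand in for trained IQA models, not the authors' actual setup:

```python
import numpy as np


def ensemble_score(predict_fns, image, n_crops=20, crop_size=224, rng=None):
    """Average predicted quality scores over models and random crops.

    predict_fns: list of callables mapping an HxWxC crop to a scalar score
                 (hypothetical stand-ins for trained IQA models).
    image: HxWxC array, with H and W at least crop_size.
    """
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    scores = []
    for _ in range(n_crops):
        # Sample a random crop_size x crop_size window.
        top = rng.integers(0, h - crop_size + 1)
        left = rng.integers(0, w - crop_size + 1)
        crop = image[top:top + crop_size, left:left + crop_size]
        # Each crop's score is the mean over all ensemble members.
        scores.append(np.mean([f(crop) for f in predict_fns]))
    # Final score is the mean over all crops.
    return float(np.mean(scores))
```

Usage with two toy "models" (here just brightness-based scorers, for illustration):

```python
img = np.full((300, 300, 3), 0.5)
fns = [lambda c: float(c.mean()), lambda c: float(c.mean()) * 0.9]
score = ensemble_score(fns, img, n_crops=5)
```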

TianheWu commented 1 year ago

@ch-andrei I will release the code and checkpoints for other datasets soon!