IIGROUP / MANIQA

[CVPRW 2022] MANIQA: Multi-dimension Attention Network for No-Reference Image Quality Assessment
Apache License 2.0

KonIQ pretrained model hyperparameters #35

Open · Mishra1995 opened 1 year ago

Mishra1995 commented 1 year ago

Hello authors,

Thanks for open-sourcing this repository! I have one query regarding the pre-trained model shared for the KonIQ dataset. In the paper you mentioned the following:

[screenshot: evaluation-protocol excerpt from the paper]

I understand that, following previous IQA works, you split the dataset in an 8:2 ratio five times using five different seeds. At test time, you took 20 random 224x224 crops per image and reported the averaged results.
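To make sure I'm reading the protocol correctly, here is how I would reproduce the 8:2 splits (a sketch only; the `seed` values are placeholders, not the ones actually used in the paper):

```python
import random

def split_dataset(image_ids, seed, train_ratio=0.8):
    """One random 8:2 train/test split, reproducible via the seed."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    cut = int(train_ratio * len(ids))
    return ids[:cut], ids[cut:]

# five splits from five different (placeholder) seeds
splits = [split_dataset(range(100), seed=s) for s in range(5)]
```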

But can you explain the following two points:

1) What do you mean by "the final score is generated by predicting the mean score of these 20 images and all results are averaged by 10 times split"? As far as I understood, only 5 splits were created, right?

2) The checkpoint you provided for KonIQ gives the best results on the val split created by one of the seed values, right? (Please correct me if my understanding is wrong.) If so, can you share the hyperparameters of that model? Or are the reported metrics from some ensemble model?

Kindly clarify,

Thanks!

TianheWu commented 1 year ago
  1. This is an error in our paper; thanks for catching it. Each image is tested by averaging the results of 20 crops.
  2. During the paper-writing period, we didn't test our model on the KonIQ dataset; I tested it not long ago. I split the dataset only once, with seed 2 or 20 (sorry, I forget which). But in the other experiments I found that MANIQA has stable performance on the KonIQ dataset. (Remember to resize images to 224x224, not crop, in the KonIQ training stage.)
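The test-time averaging described above can be sketched as follows (a minimal illustration, not the repo's actual code; `model_fn` is a hypothetical stand-in for the trained MANIQA model):

```python
import numpy as np

def predict_quality(image, model_fn, n_crops=20, crop=224, seed=0):
    """Average a model's predictions over n_crops random 224x224 crops.

    image: HxWxC array; model_fn: callable mapping a crop to a scalar
    quality score (stand-in for the trained model).
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    scores = []
    for _ in range(n_crops):
        top = rng.integers(0, h - crop + 1)
        left = rng.integers(0, w - crop + 1)
        patch = image[top:top + crop, left:left + crop]
        scores.append(model_fn(patch))
    return float(np.mean(scores))  # final score = mean over the crops
```

For KonIQ training, per the comment above, the image would instead be resized to 224x224 rather than cropped.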
Mishra1995 commented 1 year ago
  1. Thanks for clarifying that.
  2. Sure, no issues with that. Can you tell me, in general, how to pick the best model for evaluation if the same model was trained on dataset splits (8:2) created by different seeds? We would obtain as many model instances as splits created.
Mishra1995 commented 1 year ago

Hi @TianheWu ,

It would be really helpful if you could please share your insights on the above query.

TianheWu commented 1 year ago

Hi, I just saw this. Sorry, I don't quite follow your question. The split (8:2) is random.

Mishra1995 commented 1 year ago

Thanks for the reply, I understand that! My query is this: suppose you take KonIQ and divide it using 5 random seeds. For final deployment, which model will you select? Would you evaluate each of the 5 best per-split models on a separate held-out set and pick based on performance there?
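For concreteness, here is the kind of selection procedure I have in mind (all names are hypothetical; `metric` would be SRCC or PLCC in practice, higher is better):

```python
import numpy as np

def select_model(models, heldout_x, heldout_y, metric):
    """Pick the best of several per-split models on a shared held-out set.

    models: list of callables, one per 8:2 split/seed (hypothetical here);
    metric: higher-is-better score between predictions and ground-truth MOS.
    """
    scores = [metric(np.array([m(x) for x in heldout_x]), heldout_y)
              for m in models]
    best = int(np.argmax(scores))  # index of the split/seed model to deploy
    return best, scores
```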