avinabsaha / ReIQA

Official implementation for CVPR2023 Paper "Re-IQA : Unsupervised Learning for Image Quality Assessment in the Wild"
https://arxiv.org/abs/2304.00451
MIT License

Why Linear Regressor? #6

Closed 424jczhang closed 11 months ago

424jczhang commented 11 months ago

Good work! I'm interested in your work, and I'd like to ask a question that I hope will be answered. My understanding is that the relationship between the two features and the quality score should be non-linear. Why is a linear regressor recommended? And could you please share your regressor code?

avinabsaha commented 11 months ago

Traditionally, IQA/VQA algorithms employed Support Vector Regression (SVR) with kernel tricks to transform low-dimensional handcrafted features into a higher-dimensional space before regressing them onto mean opinion scores. The fundamental assumption was that, once the data resides in a higher-dimensional space, the data points are approximately linearly related to the scores, so the support vectors can effectively regress the features to mean opinion scores. Regarding the features generated by Re-IQA, each image's feature vector already spans 8192 dimensions, which is considerably high. Accordingly, based on the prevailing assumption, we opted for a linear regressor to map the feature vectors to mean opinion scores. You could also use non-linear regressors (such as SVR with kernel tricks), but that would considerably increase training time and/or lead to overfitting.

Regarding code, it should be simple to implement using scikit-learn; however, each dataset requires hyperparameter optimization to reproduce the reported results.
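
As a rough illustration of such a pipeline, here is a minimal sketch using scikit-learn's Ridge regressor. The file names (`features.npy`, `mos.npy`) and the regularization grid are placeholders, not the authors' released settings, and the search range should be tuned per dataset as noted above.

```python
# Minimal sketch (not the official training script): regress pre-extracted
# Re-IQA features against mean opinion scores with a regularized linear model.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, train_test_split
from scipy.stats import spearmanr, pearsonr

# Hypothetical inputs: concatenated content + quality features and MOS labels.
X = np.load("features.npy")   # shape: (num_images, 8192)
y = np.load("mos.npy")        # shape: (num_images,)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Cross-validated search over the regularization strength; the grid is illustrative.
search = GridSearchCV(Ridge(), {"alpha": [0.01, 0.1, 1.0, 10.0, 100.0]}, cv=5)
search.fit(X_train, y_train)

pred = search.predict(X_test)
print("SRCC:", spearmanr(pred, y_test).correlation)
print("PLCC:", pearsonr(pred, y_test)[0])
```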

424jczhang commented 11 months ago

Thank you very much for your reply! Which do you think fits better, linear regression or an MLP?

avinabsaha commented 11 months ago

Please refer to the Training Linear Regressor Section in the README of the repository.

424jczhang commented 11 months ago

Thanks a lot!