zwx8981 / UNIQUE

The repository for 'Uncertainty-aware blind image quality assessment in the laboratory and wild' and 'Learning to blindly assess image quality in the laboratory and wild'
Apache License 2.0

Negative predicted score #4

Closed pencilzhang closed 3 years ago

pencilzhang commented 3 years ago

Hi, I simply trained on the single KonIQ-10k dataset with MOS between 1 and 5, and I got negative predicted scores on some images. Do you think that is possible?

zwx8981 commented 3 years ago

Did you train the model by regression or ranking? It is normal for the latter.


pencilzhang commented 3 years ago

I trained the model by ranking. How should I interpret negative scores?

zwx8981 commented 3 years ago

With no direct fitting to the MOS, the quality range of ranking-based training is determined by the optimization process itself. You do not need to worry about negative scores as long as the monotonicity is consistent with the MOS.
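Since ranking-based training only preserves ordering, a quick sanity check is to measure the monotonicity between predicted scores and MOS, e.g. with Spearman's rank correlation (SRCC). A minimal pure-Python sketch, using hypothetical scores (the simple ranking here assumes no ties):

```python
def rank(values):
    # Assign ranks 1..n by ascending value (assumes distinct values, no ties).
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = float(r + 1)
    return ranks

def srcc(pred, mos):
    # Spearman's rank correlation: Pearson correlation computed on the ranks.
    rp, rm = rank(pred), rank(mos)
    n = len(pred)
    mp, mm = sum(rp) / n, sum(rm) / n
    cov = sum((a - mp) * (b - mm) for a, b in zip(rp, rm))
    sd_p = sum((a - mp) ** 2 for a in rp) ** 0.5
    sd_m = sum((b - mm) ** 2 for b in rm) ** 0.5
    return cov / (sd_p * sd_m)

# Hypothetical predictions: negative values are fine as long as the
# ordering agrees with the MOS.
pred = [-1.8, -0.3, 0.9, 2.4]
mos = [1.2, 2.5, 3.7, 4.8]
print(srcc(pred, mos))  # 1.0: perfectly monotonic despite negative scores
```

In practice you would compute SRCC on a full test split; values close to 1 mean the ranking-trained model orders images consistently with human opinion, regardless of the absolute score range.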


pencilzhang commented 3 years ago

@zwx8981 Thanks for your reply. But when you do testing, you may still get negative scores on unseen testing images. How do you compare the quality of testing images with negative scores?

zwx8981 commented 3 years ago

A higher score indicates better quality; for example, 1 is better than -1, and -1 is better than -2.
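In other words, negative scores are compared exactly like positive ones: only the relative order matters. A minimal illustration (the image names and scores below are hypothetical):

```python
# Hypothetical predicted scores from a ranking-trained model.
scores = {"img_a.png": -2.1, "img_b.png": 1.0, "img_c.png": -1.0}

# Rank images from best to worst quality: a higher score means better quality,
# so -1.0 still beats -2.1 even though both are negative.
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # ['img_b.png', 'img_c.png', 'img_a.png']
```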


pencilzhang commented 3 years ago

@zwx8981 When testing on other datasets, no fine-tuning is needed, right?

zwx8981 commented 3 years ago

@pencilzhang While our method can train the model directly on multiple datasets, you can still evaluate it in a cross-database setting, in which no data from the test dataset is incorporated into the training set.

pencilzhang commented 3 years ago

@zwx8981 Thanks! When you learned by regression on multiple datasets, did you use the continuous ranking annotation, fidelity loss, or hinge loss? I am wondering about the contribution of each component in your training strategy.

zwx8981 commented 3 years ago

I use MSE (L2) loss for regression. The hinge loss is only used for uncertainty estimation, and it is used in neither linear re-scaling nor binary labeling. The continuous ranking annotation cannot be used to learn by regression, so we use linearly re-scaled MOS instead.
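Since each dataset has its own MOS range (e.g. [1, 5] for KonIQ-10k), regression across datasets needs each MOS scale mapped to a common range first. A sketch of one possible linear re-scaling to [0, 1] (the exact re-scaling used in the paper may differ; the range endpoints here are assumptions):

```python
def rescale_mos(mos, lo, hi):
    """Linearly map raw MOS values from [lo, hi] to [0, 1]."""
    return [(m - lo) / (hi - lo) for m in mos]

# KonIQ-10k MOS lies roughly in [1, 5]; map it to [0, 1] before MSE regression.
print(rescale_mos([1.0, 3.0, 5.0], 1.0, 5.0))  # [0.0, 0.5, 1.0]
```

With every dataset mapped to the same [0, 1] target range, a single regression head can be trained with MSE loss on the pooled data.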


pencilzhang commented 3 years ago

@zwx8981 Sorry, I should have been clearer: are the continuous ranking annotation and the newly introduced loss beneficial when learning by regression on a single dataset?

zwx8981 commented 3 years ago

@pencilzhang On a single dataset, I think the fidelity loss may not bring much (if any) benefit.

pencilzhang commented 3 years ago

@zwx8981 Thank you very much for your explanation. Except for the interpretation of negative quality values predicted by the ranking network, I think I have already understood everything else. :)