Closed: Ellyuca closed this issue 2 years ago
The score range depends on the IQA dataset used for training. In the case of LIVE-IQA the ground-truth values are DMOS values, so the predicted scores will be on a DMOS-like scale; in other words, more degraded images receive larger score values. Certain datasets provide MOS values instead of DMOS, in which case the behavior is reversed, i.e. more degradation leads to lower score values. You therefore need to check which IQA dataset the regressor was trained on before interpreting the inference results.
Regarding the second question, giving the same image as both inputs to the FR model should ideally give a score of zero (or close to zero), since the difference of the features of the two compared images is fed as input to the trained model (refer to line 50 in the demo_score_FR.py file).
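For reference, here is a minimal sketch of that FR scoring path, assuming an sklearn-style pickled linear regressor; `extract_features`, the file paths, and the variable names are placeholders for illustration, not the exact identifiers used in demo_score_FR.py:

```python
# Sketch of the full-reference (FR) scoring idea described above: feature
# vectors are computed for both images and their difference is fed to the
# trained linear regressor.
import pickle
import numpy as np
from PIL import Image

def extract_features(img: Image.Image) -> np.ndarray:
    # Placeholder feature extractor for illustration only; the actual model
    # uses learned representations, not these simple per-channel statistics.
    arr = np.asarray(img.convert("RGB"), dtype=np.float32) / 255.0
    return np.concatenate([arr.mean(axis=(0, 1)), arr.std(axis=(0, 1))])

def fr_score(ref_path: str, dist_path: str, regressor_path: str) -> float:
    ref_feat = extract_features(Image.open(ref_path))
    dist_feat = extract_features(Image.open(dist_path))
    diff = ref_feat - dist_feat  # identical images -> all-zero difference vector
    with open(regressor_path, "rb") as f:
        regressor = pickle.load(f)  # regressor trained on an IQA dataset;
                                    # it expects the model's real feature dimensionality
    # With an all-zero input only the regressor's intercept contributes, which is
    # why comparing an image with itself gives a small constant that depends on
    # the chosen regressor (e.g. CSIQ_FR vs LIVE_FR) rather than exactly zero.
    return float(regressor.predict(diff.reshape(1, -1))[0])
```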
Hi @pavancm. Thank you so much for answering. Have a nice day.
Hi, thanks for this work. I want to use this metric to evaluate the quality of some images for my research, in both FR and NR mode. I was wondering what the score range is for FR and what it is for NR, and how we should interpret these values. For example, the NR score for churchandcapitol.bmp is 16.765732, but for image66.bmp, which is distorted, the NR score is 56.64854, using the same LIVE linear regressor for both. On the other hand, with CLIVE I obtain an NR score of 84.52191 for churchandcapitol.bmp and 23.473944 for image66.bmp.
Is higher better or lower better? I've noticed that with different linear regressors the output values fall in different ranges.
I also tried to compute the FR score between two identical images (churchandcapitol.bmp) and the result is -0.04409191 with the CSIQ_FR linear regressor and 1.8522606 with the LIVE_FR linear regressor. Also, when I compare image33.bmp with itself I obtain the same values of -0.04409191 with CSIQ_FR and 1.8522606 with LIVE_FR.
Could you please provide some insights on the interpretation of the scores?
Thank you for your time. Have a great day.