Open nkjulia opened 1 year ago
This is expected because you are using full-reference metrics: psnr, lpips. Full-reference metrics require a high-quality reference image and measure the difference to that reference; the distance between an image and itself is zero.
If you do not have a high-quality reference, you should use no-reference metrics. You can list the available no-reference metrics with:
pyiqa.list_models(metric_mode='NR')
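To see why a full-reference score degenerates when the two inputs are identical, here is a minimal PSNR sketch in plain Python. This is a toy illustration on flat pixel lists, not pyiqa's implementation:

```python
import math

def psnr(img, ref, max_val=255.0):
    # Full-reference metric: the score is defined by the difference to `ref`.
    diffs = [(a - b) ** 2 for a, b in zip(img, ref)]
    mse = sum(diffs) / len(diffs)
    if mse == 0:
        return math.inf  # identical images -> zero distance -> unbounded PSNR
    return 10 * math.log10(max_val ** 2 / mse)

img = [0, 64, 128, 255]
print(psnr(img, img))                      # inf: an image compared with itself
print(round(psnr(img, [1, 64, 128, 255]), 2))  # finite score once pixels differ
```

This is exactly why comparing an image against itself gives a "perfect" full-reference score rather than a meaningful quality estimate.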
Hi, I have a question: what are the max and min values of PieAPP between a reference and a generated image? I got -0.0334 in one of my experiments; is that OK? (I got -6.8 for two identical images (img, img).)
Thanks, I tried the NR metrics, but I am wondering how to choose the best one. Any suggestions?
>>> pyiqa.list_models(metric_mode='NR') ['brisque', 'clipiqa', 'clipiqa+', 'clipiqa+_rn50_512', 'clipiqa+_vitL14_512', 'clipscore', 'cnniqa', 'dbcnn', 'entropy', 'fid', 'hyperiqa', 'ilniqe', 'maniqa', 'maniqa-kadid', 'maniqa-koniq', 'musiq', 'musiq-ava', 'musiq-koniq', 'musiq-paq2piq', 'musiq-spaq', 'nima', 'nima-vgg16-ava', 'niqe', 'nrqm', 'paq2piq', 'pi', 'tres', 'tres-flive', 'tres-koniq', 'uranker']
@shshojaei
That is OK for PieAPP, because it uses an extra regression layer to produce the final result, which makes PieAPP output higher scores for better images. There is no mathematical bound on PieAPP's results.
@nkjulia You may refer to the benchmark results under the tests dir for the performance of these metrics on different benchmarks, and choose the one you need.
I would recommend clipiqa+ if you have difficulty selecting a suitable metric.
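A minimal sketch of how a no-reference metric such as clipiqa+ would be used (this assumes the pyiqa API with `create_metric` and the `lower_better` attribute; the live call is gated behind a flag because it downloads model weights, and `photo.jpg` is a placeholder path):

```python
RUN_PYIQA = False  # flip to True with pyiqa installed (downloads model weights)

if RUN_PYIQA:
    import pyiqa
    metric = pyiqa.create_metric('clipiqa+', device='cpu')
    print(metric.lower_better)          # whether a lower score means better quality
    print(float(metric('photo.jpg')))   # one NR score per image, no reference needed

def rank_best_first(scores, lower_better=False):
    # Utility: order (name, score) pairs so the best image comes first,
    # respecting the metric's score direction.
    return sorted(scores, key=lambda kv: kv[1], reverse=not lower_better)

print(rank_best_first([('a.png', 0.41), ('b.png', 0.73)]))
```

Checking the score direction matters because NR metrics disagree on it: some (e.g. niqe) are lower-better, while others are higher-better.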
Thanks. Is there any documentation introducing the models?
Sorry, I am not able to write a comprehensive summary because there are too many related papers. I have listed these works here: https://github.com/chaofengc/Awesome-Image-Quality-Assessment You may refer to the specific papers if interested.
thx Great Job!!
Can these NR-IQA metrics be used for image aesthetic evaluation?
Deep-learning metrics are closely tied to their training dataset. The AVA dataset is currently the main aesthetic dataset, and our toolbox includes some models trained on AVA. You may use the musiq-ava metric for aesthetic evaluation.
It is not good practice to do aesthetic evaluation with metrics that are not trained on AVA.
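As a quick way to spot the AVA-trained checkpoints among the NR model names listed earlier, a hypothetical helper (not part of pyiqa) that simply keys off the `-ava` suffix in pyiqa's naming scheme:

```python
# Model names taken from pyiqa.list_models(metric_mode='NR') output (subset).
MODELS = ['musiq', 'musiq-ava', 'musiq-koniq', 'nima', 'nima-vgg16-ava', 'clipiqa+']

def ava_trained(models):
    # Keep only metrics whose pyiqa name marks an AVA-trained checkpoint.
    return [m for m in models if 'ava' in m.split('-')]

print(ava_trained(MODELS))  # ['musiq-ava', 'nima-vgg16-ava']
```

Any of the returned names could then be passed to pyiqa's `create_metric` for aesthetic scoring, as recommended above for musiq-ava.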
I tried to use the example script to assess some images, but I got 0 for all of them. Why?