chaofengc / IQA-PyTorch

👁️ 🖼️ 🔥PyTorch Toolbox for Image Quality Assessment, including LPIPS, FID, NIQE, NRQM(Ma), MUSIQ, TOPIQ, NIMA, DBCNN, BRISQUE, PI and more...
https://iqa-pytorch.readthedocs.io/

all result is 0 #86

Open nkjulia opened 1 year ago

nkjulia commented 1 year ago

I tried to use the example script to assess some images, but I got 0 for all of them. Why?

import pyiqa
import torch

# list all available metrics
print(pyiqa.list_models())

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

# create metric with default setting
iqa_metric = pyiqa.create_metric('lpips', device=device)
# Note that gradient propagation is disabled by default. set as_loss=True to enable it as a loss function.
iqa_metric = pyiqa.create_metric('lpips', device=device, as_loss=False)

# create metric with custom setting
#iqa_metric = pyiqa.create_metric('psnr', test_y_channel=True, color_space='ycbcr').to(device)

# check if lower better or higher better
print(iqa_metric.lower_better)

import os

for img in [os.path.join("images", k) for k in os.listdir("images")]:
    score_fr = iqa_metric(img, img)  # compares each image with itself
    print(img, score_fr)
chaofengc commented 1 year ago

This is expected because you are using full-reference metrics (psnr, lpips). Full-reference metrics require a high-quality reference image and measure the difference from that reference. The distance between an image and itself is zero.
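To see why, here is a minimal pure-Python sketch (not the toolbox's implementation) of a full-reference distance such as MSE: comparing an image with itself always yields zero, while a distorted copy yields a positive score.

```python
def mse(ref, dist):
    """Mean squared error between two equal-sized images (flat pixel lists)."""
    assert len(ref) == len(dist)
    return sum((r - d) ** 2 for r, d in zip(ref, dist)) / len(ref)

img = [0.1, 0.5, 0.9, 0.3]  # toy 4-pixel image

print(mse(img, img))                    # identical inputs -> 0.0
print(mse(img, [0.2, 0.5, 0.8, 0.3]))  # distorted copy -> positive
```

The same logic holds for learned metrics like LPIPS: they compare two inputs, so identical inputs give a distance of zero.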

If you do not have a high-quality reference, you should use no-reference metrics. You can list the available no-reference metrics with:

pyiqa.list_models(metric_mode='NR')
shshojaei commented 1 year ago

> This is expected because you are using full-reference metrics (psnr, lpips). Full-reference metrics require a high-quality reference image and measure the difference from that reference. The distance between an image and itself is zero.
>
> If you do not have a high-quality reference, you should use no-reference metrics. You can list the available no-reference metrics with:
>
> `pyiqa.list_models(metric_mode='NR')`

Hi, I have a question: what are the max and min values of PieAPP between a reference and a generated image? I got -0.0334 in one of my experiments; is that OK? (I got -6.8 for two identical images (img, img).)

nkjulia commented 1 year ago

Thanks, I tried the NR metrics, but I am wondering how to choose the best one. Any suggestions?

>>> pyiqa.list_models(metric_mode='NR')
['brisque', 'clipiqa', 'clipiqa+', 'clipiqa+_rn50_512', 'clipiqa+_vitL14_512', 'clipscore', 'cnniqa', 'dbcnn', 'entropy', 'fid', 'hyperiqa', 'ilniqe', 'maniqa', 'maniqa-kadid', 'maniqa-koniq', 'musiq', 'musiq-ava', 'musiq-koniq', 'musiq-paq2piq', 'musiq-spaq', 'nima', 'nima-vgg16-ava', 'niqe', 'nrqm', 'paq2piq', 'pi', 'tres', 'tres-flive', 'tres-koniq', 'uranker']

chaofengc commented 1 year ago

@shshojaei

That is OK for PieAPP, because it uses an extra regression layer to produce the final score, which makes PieAPP output higher results for good images. There is no mathematical bound on PieAPP's results.
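As a rough intuition (the weights below are made up for illustration, not PieAPP's actual parameters): a final linear regression layer `w·x + b` is an unbounded affine map, so it can produce any real value, including negatives.

```python
def linear_head(features, weights, bias):
    """Final regression layer: an unbounded affine map of the features."""
    return sum(f * w for f, w in zip(features, weights)) + bias

# Made-up weights for illustration only
w, b = [0.8, -1.5], -0.2
print(round(linear_head([0.1, 0.9], w, b), 2))  # -1.47: negative scores are possible
```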

chaofengc commented 1 year ago

@nkjulia You may refer to the benchmark results under the tests dir for the performance of these metrics on different benchmarks, and choose the one you need.
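Those benchmarks rank metrics by how well their scores agree with human opinion, typically via rank correlation such as SRCC. As a rough illustration (the scores below are toy numbers, not from the actual tests dir), Spearman correlation can be computed like this:

```python
def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks (assumes no ties)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Toy example: metric scores vs. human mean opinion scores (made-up values)
mos = [1.0, 2.0, 3.0, 4.0, 5.0]
metric = [0.2, 0.3, 0.5, 0.4, 0.9]  # mostly agrees with the MOS ordering
print(round(spearman(mos, metric), 2))  # 0.9
```

A metric with SRCC close to 1.0 on a benchmark close to your data is usually a good pick.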

I would recommend clipiqa+ if you have difficulty selecting a suitable metric.

nkjulia commented 1 year ago

> You may refer to the benchmark results under the tests dir for the performance of these metrics on different benchmarks, and choose the one you need.
>
> I would recommend clipiqa+ if you have difficulty selecting a suitable metric.

Thanks. Is there any documentation introducing the models?

chaofengc commented 1 year ago

Sorry, I do not have the time to write a comprehensive summary because there are too many related papers. I have listed these works here: https://github.com/chaofengc/Awesome-Image-Quality-Assessment. You may refer to the specific papers if interested.

nkjulia commented 1 year ago

> Sorry, I do not have the time to write a comprehensive summary because there are too many related papers. I have listed these works here: https://github.com/chaofengc/Awesome-Image-Quality-Assessment. You may refer to the specific papers if interested.

Thanks, great job!!

nkjulia commented 1 year ago

> Sorry, I do not have the time to write a comprehensive summary because there are too many related papers. I have listed these works here: https://github.com/chaofengc/Awesome-Image-Quality-Assessment. You may refer to the specific papers if interested.

Can these NR-IQA metrics be used for image aesthetic evaluation?

chaofengc commented 1 year ago

The deep learning metrics are closely tied to their training datasets. The AVA dataset is currently the main aesthetic dataset, and our toolbox has some models trained on AVA. You may use the musiq-ava metric for aesthetic evaluation.

It is not good practice to do aesthetic evaluation with metrics that are not trained on AVA.