IceClear / CLIP-IQA

[AAAI 2023] Exploring CLIP for Assessing the Look and Feel of Images
315 stars · 18 forks

Code without OpenMMLab integration? #13

Closed: justlike-prog closed this issue 1 year ago

justlike-prog commented 1 year ago

@IceClear Any chance to get the code without using OpenMMLab? Would make it easier for experimenting. Thanks for the awesome work by the way.

justlike-prog commented 1 year ago

Or rather, it would be nice to know which PyTorch transforms one needs to reproduce the results given an image. The OpenMMLab transforms are quite cryptic and differ from the torchvision ones. Were the results in the paper based on the mmcv transforms? Right now I am doing the following, but my results are a bit skewed:

    from PIL import Image
    from torchvision import transforms

    # Normalize with ImageNet mean/std
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    img = Image.open("./test.jpeg").convert("RGB")
    img = transform(img)      # (3, H, W)
    img = img.unsqueeze(0)    # add batch dimension -> (1, 3, H, W)
IceClear commented 1 year ago

Hi, you may refer to IQA-pytorch, which also supports CLIP-IQA :) It should be easy to use.