Sidi Yang, Tianhe Wu, Shuwei Shi, Shanshan Lao, Yuan Gong, Mingdeng Cao, Jiahao Wang and Yujiu Yang
Tsinghua University Intelligent Interaction Group
This repository is the official PyTorch implementation of MANIQA: Multi-dimension Attention Network for No-Reference Image Quality Assessment. :fire::fire::fire: We won first place in the NTIRE 2022 Perceptual Image Quality Assessment Challenge, Track 2: No-Reference.
| Ground Truth | Distortion 1 | Distortion 2 | Distortion 3 | Distortion 4 |
|---|---|---|---|---|
| MOS (GT) | 1539.1452 (1) | 1371.4593 (2) | 1223.4258 (3) | 1179.6223 (4) |
| Ours (MANIQA) | 0.743674 (1) | 0.625845 (2) | 0.504243 (3) | 0.423222 (4) |
| Ground Truth | Distortion 1 | Distortion 2 | Distortion 3 | Distortion 4 |
|---|---|---|---|---|
| MOS (GT) | 4.33 (1) | 2.27 (2) | 1.33 (3) | 1.1 (4) |
| Ours (MANIQA) | 0.8141 (1) | 0.2615 (2) | 0.0871 (3) | 0.0490 (4) |
Predicted scores for five in-the-wild example images (images omitted): Model: 0.3398 | Model: 0.2612 | Model: 0.3078 | Model: 0.3716 | Model: 0.3581
No-Reference Image Quality Assessment (NR-IQA) aims to assess the perceptual quality of images in accordance with human subjective perception. Unfortunately, existing NR-IQA methods are far from meeting the need for accurate quality predictions on images with GAN-based distortions. To this end, we propose the Multi-dimension Attention Network for no-reference Image Quality Assessment (MANIQA) to improve performance on GAN-based distortions. We first extract features via ViT; then, to strengthen global and local interactions, we propose the Transposed Attention Block (TAB) and the Scale Swin Transformer Block (SSTB). These two modules apply attention mechanisms across the channel and spatial dimensions, respectively. In this multi-dimensional manner, the modules cooperatively increase the interaction among different regions of the image, both globally and locally. Finally, a dual-branch structure for patch-weighted quality prediction is applied to predict the final score from each patch's score and weight. Experimental results demonstrate that MANIQA outperforms state-of-the-art methods on four standard datasets (LIVE, TID2013, CSIQ, and KADID-10K) by a large margin. Moreover, our method ranked first in the final testing phase of the NTIRE 2022 Perceptual Image Quality Assessment Challenge, Track 2: No-Reference.
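The dual-branch, patch-weighted prediction described above can be illustrated with a minimal NumPy sketch. The function name and toy values here are hypothetical; in the real model, both the per-patch scores and the per-patch weights come from attention-based heads:

```python
import numpy as np

def patch_weighted_score(patch_scores, patch_weights):
    """Combine per-patch scores s_i and weights w_i into one image score.

    One branch predicts a quality score for every patch, the other an
    importance weight; the final prediction is the weight-normalized sum
    sum(w_i * s_i) / sum(w_i).
    """
    s = np.asarray(patch_scores, dtype=float)
    w = np.asarray(patch_weights, dtype=float)
    return float((w * s).sum() / w.sum())

# Toy example: four patches, with larger weights on the first two.
score = patch_weighted_score([0.9, 0.8, 0.2, 0.1], [1.0, 1.0, 0.5, 0.5])
```

This weighting lets salient patches dominate the final score instead of a plain average over all patches.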
The PIPAL22 dataset is used in the NTIRE22 competition, and we also test our model on PIPAL21.
We also conducted experiments on LIVE, CSIQ, TID2013 and KADID-10K datasets.
Attention:

Click the link and download only the pretrained model checkpoints, ignoring the source files (the Koniq-10k tag has the latest source file).

| Training Set | Testing Set | Checkpoints of MANIQA |
|---|---|---|
| PIPAL2022 dataset (200 reference images, 23200 distorted images, MOS scores for each distorted image) | [Validation] PIPAL2022 dataset (1650 distorted images) | download SRCC: 0.686, PLCC: 0.707 |
| KADID-10K dataset (81 reference images and 10125 distorted images); 8000 distorted images for training | KADID-10K dataset; 2125 distorted images for testing | download SRCC: 0.939, PLCC: 0.939 |
| KONIQ-10K dataset (in-the-wild database consisting of 10,073 quality-scored images); 8058 distorted images for training | KONIQ-10K dataset; 2015 distorted images for testing | download SRCC: 0.930, PLCC: 0.946 |
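The SRCC and PLCC values in the table are the Spearman and Pearson correlations between predicted scores and MOS. A quick sketch of how they can be computed with SciPy, reusing the PIPAL example rows above as toy data (only four points, so this is purely illustrative):

```python
from scipy.stats import pearsonr, spearmanr

# Toy data: the MOS / prediction rows from the PIPAL example table above.
mos = [1539.1452, 1371.4593, 1223.4258, 1179.6223]
pred = [0.743674, 0.625845, 0.504243, 0.423222]

srcc = spearmanr(mos, pred)[0]  # rank (monotonicity) agreement
plcc = pearsonr(mos, pred)[0]   # linear agreement
```

Since the predictions rank the four distortions in exactly the MOS order, the SRCC on this toy data is 1.0.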
Training the model:

python train_maniqa.py

Predicting the score of one image:

python predict_one_image.py
Generating the output file:
python inference.py
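A common IQA test-time protocol, which prediction scripts like the ones above typically follow, is to average the model's score over several random crops of the input image. A minimal sketch (all names hypothetical; `score_fn` stands in for the trained MANIQA model):

```python
import numpy as np

def predict_image_score(img, score_fn, crop=224, n_crops=20, seed=0):
    """Average a per-crop scoring function over random crops of `img`.

    `score_fn` is a placeholder for the trained model: any callable that
    maps a (crop, crop, C) array to a scalar quality score.
    """
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    scores = []
    for _ in range(n_crops):
        y = int(rng.integers(0, h - crop + 1))
        x = int(rng.integers(0, w - crop + 1))
        scores.append(score_fn(img[y:y + crop, x:x + crop]))
    return float(np.mean(scores))
```

Averaging over crops reduces the variance introduced by scoring a single fixed-size crop of a larger image.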
Python requirements can be installed by:
pip install -r requirements.txt
@inproceedings{yang2022maniqa,
title={MANIQA: Multi-dimension Attention Network for No-Reference Image Quality Assessment},
author={Yang, Sidi and Wu, Tianhe and Shi, Shuwei and Lao, Shanshan and Gong, Yuan and Cao, Mingdeng and Wang, Jiahao and Yang, Yujiu},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={1191--1200},
year={2022}
}
Our code partially borrows from anse3832 and timm. Thanks to SwinIR for its README.md, which we used as a template for ours.
[CVPRW 2021] Region-Adaptive Deformable Network for Image Quality Assessment (4th place in FR track)
[CVPRW 2022] Attentions Help CNNs See Better: Attention-based Hybrid Image Quality Assessment Network (1st place in FR track)