QMoQ / OAPT


OAPT: Offset-Aware Partition Transformer for Double JPEG Artifacts Removal

Qiao Mo, Yukang Ding, Jinhua Hao, Qiang Zhu, Ming Sun, Chao Zhou, Feiyu Chen, Shuyuan Zhu

UESTC, Kuaishou Technology

Official implementation of OAPT (ECCV 2024), a transformer-based network designed for restoring double (or multiple) compressed JPEG images.

Paper Link


TODO List


Architecture

(figure: network architecture)

Pattern clustering & inverse operation

(figure: pattern clustering and its inverse)
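
The clustering rearranges pixels that share the same double-compression pattern so that they can be processed together, and the inverse operation restores the original spatial layout. Below is a minimal, hedged sketch of this idea, not the repository's actual implementation: the function names `pattern_cluster` / `pattern_inverse` and the assumption that the second compression grid is shifted by `(i, j)` pixels are illustrative only.

```python
# Minimal sketch (NOT the official OAPT code) of offset-aware pattern
# clustering and its exact inverse. Assumption: the second JPEG grid is
# shifted by (i, j) pixels, so each pixel falls into one of 4 groups
# depending on whether its row/column (mod 8) lies before or after the
# offset boundary inside its 8x8 block.
import torch

def pattern_cluster(x, offset):
    """x: (B, C, H, W) feature map; offset: (i, j) with 0 <= i, j < 8."""
    b, c, h, w = x.shape
    i, j = offset
    rows = torch.arange(h).unsqueeze(1).expand(h, w)
    cols = torch.arange(w).unsqueeze(0).expand(h, w)
    # 4 pattern ids per 8x8 block, determined by the offset boundaries.
    pattern = 2 * (rows % 8 < i).long() + (cols % 8 < j).long()  # (H, W)
    order = torch.argsort(pattern.flatten())        # gather same-pattern pixels together
    clustered = x.flatten(2)[:, :, order]           # (B, C, H*W), grouped by pattern
    return clustered, order

def pattern_inverse(clustered, order, h, w):
    """Scatter pixels back to their original spatial positions."""
    inv = torch.empty_like(order)
    inv[order] = torch.arange(order.numel())
    b, c = clustered.shape[0], clustered.shape[1]
    return clustered[:, :, inv].view(b, c, h, w)

if __name__ == "__main__":
    x = torch.randn(1, 4, 16, 16)
    y, order = pattern_cluster(x, offset=(3, 5))
    assert torch.equal(pattern_inverse(y, order, 16, 16), x)  # exact inverse
```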

Experimental results on grayscale double JPEG images

(figure: quantitative results)

Visual results

(figure: grayscale visual results)

Training details

| Model (Gray) | Params (M) | Multi-Adds (G) | Training Set | Pretrained Model | Iterations |
| --- | --- | --- | --- | --- | --- |
| SwinIR | 11.49 | 293.42 | DF2K | 006_CAR_DFWB_s126w7_SwinIR-M_jpeg10 | 200k |
| HAT-S | 9.24 | 227.14 | DF2K | HAT-S_SRx2 | 800k |
| ART | 16.14 | 415.51 | DF2K | CAR_ART_q10 | 200k |
| OAPT | 12.96 | 293.60 | DF2K | 006_CAR_DFWB_s126w7_SwinIR-M_jpeg10 | 200k |
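
For context on how double JPEG inputs arise, the sketch below shows one common way to synthesize a doubly compressed grayscale image with a misaligned second grid: compress at quality q1, crop by the offset so the second 8x8 grid no longer aligns with the first, then compress again at q2. This is only an illustrative assumption, not the data pipeline used for the results above; the `double_jpeg` helper and the crop-based offset are hypothetical.

```python
# Hedged sketch (not this repo's data pipeline) of synthesizing a
# non-aligned doubly compressed grayscale JPEG image with OpenCV.
import cv2
import numpy as np

def double_jpeg(gray, q1, q2, offset=(4, 4)):
    """gray: HxW uint8 array; q1, q2: JPEG qualities; offset: (i, j) pixel shift."""
    def jpeg(img, q):
        ok, buf = cv2.imencode('.jpg', img, [cv2.IMWRITE_JPEG_QUALITY, q])
        return cv2.imdecode(buf, cv2.IMREAD_GRAYSCALE)

    once = jpeg(gray, q1)
    i, j = offset
    shifted = once[i:, j:]          # crop so the second grid is misaligned
    return jpeg(shifted, q2)

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    lq = double_jpeg(img, q1=10, q2=10, offset=(4, 4))
    print(lq.shape)  # (60, 60): the offset crop removes i rows and j columns
```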

Setup

This project is mainly built on SwinIR and HAT. All pretrained weights are available on Baidu Netdisk and Google Drive.

We used PyTorch 1.7.0.

pip install -r requirements.txt
python setup.py develop

Test

CUDA_VISIBLE_DEVICES=0 python oapt/test.py -opt ./options/Gray/test/test_oapt.yml

Train

CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --master_port=73 hat/train.py -opt options/Gray/train/train_oapt.yml --launcher pytorch