SFA code for the following papers:
Framework: Caffe 1.0 + MATLAB 2016b Interface
The PLSR model used in the test code is trained on LIVE gblur images with DMOS (the larger, the worse). `w` and `best_layer` in the journal extension are determined by five-fold cross-validation (see `TMMinter.m`).
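Below is a minimal sketch of how such a five-fold selection of `w` and `best_layer` can look; it is not the exact logic of `TMMinter.m`, and the variables `feats` (per-layer aggregated features, one N-by-D matrix per candidate layer) and `dmos` (N-by-1 DMOS vector) are hypothetical names used for illustration.

```matlab
% Sketch: pick the number of PLS components (w) and the feature layer by
% 5-fold cross-validation on LIVE gblur, maximizing the mean fold SROCC.
K = 5;
cv = cvpartition(numel(dmos), 'KFold', K);
best_srocc = -Inf;
for l = 1:numel(feats)                  % candidate layers
    for w = 1:20                        % candidate numbers of PLS components
        srocc = zeros(K, 1);
        for k = 1:K
            tr = training(cv, k); te = test(cv, k);
            [~, ~, ~, ~, beta] = plsregress(feats{l}(tr, :), dmos(tr), w);
            pred = [ones(sum(te), 1), feats{l}(te, :)] * beta;   % PLSR prediction
            srocc(k) = corr(pred, dmos(te), 'type', 'Spearman');
        end
        if mean(srocc) > best_srocc
            best_srocc = mean(srocc); best_layer = l; best_w = w;
        end
    end
end
```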
The `ResNet-50-model.caffemodel` is downloaded from KaimingHe/deep-residual-networks, and it must be placed in the `models/` directory before you run the code! It is about 100 MB, which is too large to upload to this repo. If you have difficulty, you can also download `ResNet-50-model.caffemodel` from my sharing on BaiduNetDisk with password u8sd.
New! We provide a PyTorch implementation of the method in `SFA-pytorch`.
The only model that needs training is a PLSR model, which is fitted with MATLAB's `plsregress` function. The features are extracted from DCNN models pre-trained on the image classification task.
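A minimal sketch of this single training step is shown below; the variable names `X_train`, `y_train`, and `X_test` are illustrative placeholders for the pre-extracted DCNN features and subjective scores, not names used in the repo.

```matlab
% Fit a PLSR model on pre-extracted features X_train (N-by-D) against
% subjective scores y_train (N-by-1) using MATLAB's plsregress.
ncomp = 10;                                   % number of PLS components (illustrative)
[~, ~, ~, ~, beta] = plsregress(X_train, y_train, ncomp);

% Predict quality scores for new images from their features X_test.
y_pred = [ones(size(X_test, 1), 1), X_test] * beta;
```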
Update: remember to change the values of `im_dir` and `im_lists` in the data info.
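For illustration only (the exact location and format follow the repo's data info), the two values simply need to point at your local copy of the dataset:

```matlab
% Hypothetical example: adapt the paths to your own machine.
im_dir   = '/path/to/your/dataset/images/';        % root folder of the images
im_lists = '/path/to/your/dataset/im_lists.mat';   % list of image names/scores
```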
You can download the datasets used in the papers from their owners for research purposes. If you have difficulty, you can refer to my sharing on BaiduNetDisk with password cu9j. Only the blur-related images are considered in this work.
The reported Spearman rank-order correlation coefficient (SROCC) is multiplied by -1 when the training and testing datasets use different forms of subjective scores, i.e., one uses MOS and the other DMOS. This ensures that, in all cases, an SROCC closer to 1 indicates better prediction monotonicity.
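The convention amounts to a simple sign flip, sketched below with illustrative variable names (`y_pred`, `y_test`, and the two boolean flags are not names from the repo):

```matlab
% Flip the SROCC sign when the training scores (e.g., DMOS, larger = worse)
% and testing scores (e.g., MOS, larger = better) have opposite polarity.
srocc = corr(y_pred, y_test, 'type', 'Spearman');
if train_is_dmos ~= test_is_dmos      % different forms of subjective scores
    srocc = -srocc;
end
```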
Please cite our papers if they help your research:
@article{li2018which,
  title={Which Has Better Visual Quality: The Clear Blue Sky or a Blurry Animal?},
  author={Li, Dingquan and Jiang, Tingting and Lin, Weisi and Jiang, Ming},
  journal={IEEE Transactions on Multimedia},
  volume={21},
  number={5},
  pages={1221--1234},
  month={May},
  year={2019},
  doi={10.1109/TMM.2018.2875354}
}
@inproceedings{li2017exploiting,
  title={Exploiting High-Level Semantics for No-Reference Image Quality Assessment of Realistic Blur Images},
  author={Li, Dingquan and Jiang, Tingting and Jiang, Ming},
  booktitle={Proceedings of the 2017 ACM on Multimedia Conference},
  pages={378--386},
  year={2017},
  organization={ACM}
}
Dingquan Li, dingquanli AT pku DOT edu DOT cn.