This is an implementation of "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" (CVPR 2016 oral paper) in Caffe.
VDSR (Very Deep network for Super-Resolution) is an end-to-end network with 20 convolutional layers for single-image super-resolution. VDSR outperforms other state-of-the-art SISR methods such as SRCNN, A+, and CSCN (see also my implementation of CSCN).
A multi-scale implementation is included, so a single model handles super-resolution at several scales. The model I trained yields performance similar to the original paper.
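As a rough illustration of the multi-scale idea (not the actual code in "generate_train.m"), the sketch below builds bicubic LR/HR training pairs for scales 2, 3, and 4 from one ground-truth image, so that one network sees all three scales. The image file name is a placeholder, and training uses only the luminance (Y) channel, as in the paper.

```matlab
% Minimal sketch of multi-scale pair generation; the example file name is a placeholder.
img   = imread('butterfly_GT.bmp');
ycc   = rgb2ycbcr(img);
hr    = im2double(ycc(:, :, 1));                    % luminance channel only
pairs = struct('scale', {}, 'lr', {}, 'hr', {});
for scale = [2 3 4]
    crop = hr(1:floor(end/scale)*scale, 1:floor(end/scale)*scale);          % crop to a multiple of scale
    lr   = imresize(imresize(crop, 1/scale, 'bicubic'), scale, 'bicubic');  % degrade, then upscale back
    pairs(end + 1) = struct('scale', scale, 'lr', lr, 'hr', crop);
end
```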
Code for data augmentation is included.
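A rough sketch of the kind of augmentation "data_aug.m" performs is shown below; the exact set of transforms, folder names, file pattern, and scale factors in the actual script may differ.

```matlab
% Sketch of typical augmentation for the 291-image training set: 90-degree rotations,
% horizontal flips, and downscaled copies. Folder names and scale factors are assumptions.
src = 'Train/291';  dst = 'Train/291_aug';
if ~exist(dst, 'dir'), mkdir(dst); end
files = dir(fullfile(src, '*.bmp'));
for f = 1:numel(files)
    img = imread(fullfile(src, files(f).name));
    [~, base, ext] = fileparts(files(f).name);
    for rot = 0:3                                     % rotations by 0/90/180/270 degrees
        for do_flip = 0:1                             % with and without a horizontal flip
            aug = imrotate(img, 90 * rot);
            if do_flip, aug = flip(aug, 2); end
            imwrite(aug, fullfile(dst, sprintf('%s_r%d_f%d%s', base, rot, do_flip, ext)));
        end
    end
    for s = [0.7 0.5]                                 % smaller copies enrich the patch statistics
        imwrite(imresize(img, s, 'bicubic'), fullfile(dst, sprintf('%s_s%d%s', base, round(10*s), ext)));
    end
end
```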
Adam is used instead of SGD; 80 epochs are enough.
If you just want to run some tests, "VDSR_Official.mat" is recommended.
Place the "Train" folder into "($Caffe_Dir)/examples/", and rename "Train" to "VDSR"
Download the training data (the 291-image dataset).
Open MATLAB, change directory to ($Caffe_Dir)/examples/VDSR, run "data_aug.m" to perform data augmentation, then run "generate_train.m" and "generate_test.m" to generate the training and test data (code adapted from SRCNN).
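For orientation, here is a condensed sketch of what a "generate_train.m"-style script does: cut the bicubic-interpolated LR image and the ground-truth HR image into aligned sub-images and write them to an HDF5 file that Caffe's HDF5Data layer can read. The dataset names '/data' and '/label', the 41×41 patch size, the stride, the folder and output file names, and whether the label is the HR patch or the residual are assumptions here; check the actual scripts and the network prototxt before relying on them.

```matlab
% Condensed sketch of training-data generation (single scale shown; the multi-scale version
% simply loops over scales 2/3/4 as sketched earlier). Patch size, stride, dataset names,
% and file names are assumptions taken from the SRCNN-style pipeline.
scale = 3;  patch = 41;  stride = 41;
data  = zeros(patch, patch, 1, 0, 'single');
label = zeros(patch, patch, 1, 0, 'single');

files = dir(fullfile('Train/291_aug', '*.bmp'));        % augmented images (placeholder folder)
for f = 1:numel(files)
    img = imread(fullfile('Train/291_aug', files(f).name));
    if size(img, 3) == 3, img = rgb2ycbcr(img); end
    hr = im2double(img(:, :, 1));
    hr = hr(1:floor(end/scale)*scale, 1:floor(end/scale)*scale);
    lr = imresize(imresize(hr, 1/scale, 'bicubic'), scale, 'bicubic');
    for r = 1:stride:size(hr, 1) - patch + 1
        for c = 1:stride:size(hr, 2) - patch + 1
            data(:, :, 1, end + 1)  = single(lr(r:r+patch-1, c:c+patch-1));
            % Label here is the HR patch; store hr - lr instead if the loss in the
            % prototxt is defined on the residual.
            label(:, :, 1, end + 1) = single(hr(r:r+patch-1, c:c+patch-1));
        end
    end
end

order = randperm(size(data, 4));                        % shuffle the patches
data  = data(:, :, :, order);
label = label(:, :, :, order);

h5create('train.h5', '/data',  size(data),  'Datatype', 'single');
h5create('train.h5', '/label', size(label), 'Datatype', 'single');
h5write('train.h5', '/data',  data);
h5write('train.h5', '/label', label);
```

Note that Caffe's HDF5Data layer takes a text file listing the HDF5 file paths as its source, so "train.h5" also has to be listed in whatever text file the data layer in the prototxt points to.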
To train VDSR, run `./build/tools/caffe train --solver examples/VDSR/VDSR_solver.prototxt`
After training, run "caffemodel2mat.m" to convert the caffemodel into a .mat file for testing (matcaffe is required).
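The conversion itself is straightforward with matcaffe; a minimal sketch is below. The prototxt and caffemodel file names, the assumption that convolution layers are named `conv*`, and the output variable names are placeholders, so the real "caffemodel2mat.m" may organise things differently.

```matlab
% Sketch of a caffemodel -> .mat conversion using matcaffe. File and layer names are placeholders.
addpath('matlab');                                   % ($Caffe_Dir)/matlab, so that the +caffe package is on the path
caffe.set_mode_cpu();
net = caffe.Net('examples/VDSR/VDSR_net.prototxt', ...
                'examples/VDSR/VDSR_iter_xxxxx.caffemodel', 'test');

weights = {};  biases = {};
for i = 1:numel(net.layer_names)
    name = net.layer_names{i};
    if strncmp(name, 'conv', 4)                      % assumes convolution layers are named conv*
        weights{end + 1} = net.params(name, 1).get_data();   % filter blob
        biases{end + 1}  = net.params(name, 2).get_data();   % bias blob
    end
end
save('VDSR_Adam.mat', 'weights', 'biases');
```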
"Demo_SR_Conv.m" is a simple test code. Just run it and you will get the result
"VDSR_Adam.mat" is a model trained by myself
"VDSR_Official.mat" is an official model converted from Official Test Code
PSNR (dB), scale ×2:

Dataset | VDSR_Official | VDSR_Adam
---|---|---
Set5 | 37.53 | 37.58
Set14 | 33.03 | 33.00
BSD100 | 31.90 | 31.87
PSNR (dB), scale ×3:

Dataset | VDSR_Official | VDSR_Adam
---|---|---
Set5 | 33.66 | 33.68
Set14 | 29.77 | 29.75
BSD100 | 28.82 | 28.80
PSNR (dB), scale ×4:

Dataset | VDSR_Official | VDSR_Adam
---|---|---
Set5 | 31.35 | 31.33
Set14 | 28.01 | 27.95
BSD100 | 27.29 | 27.24
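The numbers above are PSNR in dB on the luminance channel. If the evaluation follows the usual SRCNN/VDSR convention, a border equal to the scale factor is shaved off before measuring; a minimal helper along those lines (the function name is mine, not from the repo's scripts) looks like this:

```matlab
function p = psnr_y(hr_y, sr_y, shave)
% PSNR between two luminance-channel images in [0, 255], ignoring a shaved border.
% 'shave' is typically set to the scale factor; this helper is an illustrative stand-in
% for whatever PSNR routine the test scripts actually use.
hr_y = double(hr_y(1 + shave:end - shave, 1 + shave:end - shave));
sr_y = double(sr_y(1 + shave:end - shave, 1 + shave:end - shave));
mse  = mean((hr_y(:) - sr_y(:)).^2);
p    = 10 * log10(255^2 / mse);
end
```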
Please cite [1] and this repository if you use this code in your work. Thank you!
[1] Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee, "Accurate Image Super-Resolution Using Very Deep Convolutional Networks," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.