Pixel-level Self-Paced Learning for Super-Resolution

This is an official implementation of the paper Pixel-level Self-Paced Learning for Super-Resolution, which has been accepted by ICASSP 2020.

[arxiv][PDF]

Trained model files: Baidu Pan (code: v0be)

Requirements

This code is forked from thstkdgus35/EDSR-PyTorch; the libraries required to run it are listed in that repository's README.

Core Parts

Figure: the PSPL framework.

The detailed code can be found in Loss.forward, which can be simplified as follows:

# take L1 Loss as example

import torch
import torch.nn as nn
import torch.nn.functional as F
from . import pytorch_ssim

class Loss(nn.modules.loss._Loss):
    def __init__(self, spl_alpha, spl_beta, spl_maxVal):
        super(Loss, self).__init__()
        self.loss = nn.L1Loss()
        self.alpha = spl_alpha
        self.beta = spl_beta
        self.maxVal = spl_maxVal

    def forward(self, sr, hr, step):
        # sigma grows linearly with the training step
        sigma = self.alpha * step + self.beta
        # Gaussian function of SSIM, scaled by maxVal
        gauss = lambda x: torch.exp(-((x + 1) / sigma) ** 2) * self.maxVal
        # per-pixel SSIM map between HR and SR (no gradient through it)
        ssim = pytorch_ssim.ssim(hr, sr, reduction='none').detach()
        # pixel-level attention weights derived from the SSIM map
        weight = gauss(ssim).detach()
        # apply the same weights to both images before computing the loss
        nsr, nhr = sr * weight, hr * weight
        # weighted L1 loss
        lossval = self.loss(nsr, nhr)
        return lossval
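
For context, a minimal training-loop sketch using this loss could look like the following. The network, data loader, and the alpha/beta/maxVal values here are placeholders for illustration, not the settings used in the paper.

# Hypothetical usage sketch; model, loader, and hyperparameters are placeholders.
import torch

model = build_sr_model()                      # any SR network (placeholder)
criterion = Loss(spl_alpha=1e-4, spl_beta=1.0, spl_maxVal=2.0)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for step, (lr_img, hr_img) in enumerate(train_loader):
    sr_img = model(lr_img)
    # the global step drives the sigma schedule inside Loss.forward
    loss = criterion(sr_img, hr_img, step)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()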

The library pytorch_ssim is forked from Po-Hsun-Su/pytorch-ssim, with some details rewritten to adapt it to our requirements.
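
The change that matters for PSPL is that ssim(...) is called with reduction='none', so it returns a per-pixel SSIM map rather than a single averaged score. A rough sketch of such a map is shown below; it uses a uniform window and the usual SSIM constants for [0, 1]-range images, and is not the exact code of the fork (which, like the upstream library, is based on a Gaussian window).

# Illustrative per-pixel SSIM map; the fork uses a Gaussian window and its own defaults.
import torch
import torch.nn.functional as F

def ssim_map(img1, img2, window_size=11, C1=0.01 ** 2, C2=0.03 ** 2):
    channels = img1.size(1)
    # uniform averaging window applied per channel (depthwise convolution)
    window = torch.ones(channels, 1, window_size, window_size,
                        dtype=img1.dtype, device=img1.device) / window_size ** 2
    pad = window_size // 2

    mu1 = F.conv2d(img1, window, padding=pad, groups=channels)
    mu2 = F.conv2d(img2, window, padding=pad, groups=channels)
    mu1_sq, mu2_sq, mu1_mu2 = mu1 ** 2, mu2 ** 2, mu1 * mu2

    sigma1_sq = F.conv2d(img1 * img1, window, padding=pad, groups=channels) - mu1_sq
    sigma2_sq = F.conv2d(img2 * img2, window, padding=pad, groups=channels) - mu2_sq
    sigma12 = F.conv2d(img1 * img2, window, padding=pad, groups=channels) - mu1_mu2

    # classic SSIM formula evaluated at every pixel (no spatial averaging)
    return ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / \
           ((mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2))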

Attention weight values change according to the SSIM index and the training step (figure: attention values).
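
As a quick numerical illustration of how these curves behave (the alpha, beta, and maxVal values below are arbitrary placeholders, not the paper's hyperparameters):

# Sketch: how the Gaussian attention weight reacts to SSIM and the training step.
import torch

alpha, beta, maxVal = 1e-4, 1.0, 2.0          # illustrative values only
gauss = lambda x, sigma: torch.exp(-((x + 1) / sigma) ** 2) * maxVal

ssim = torch.tensor([0.2, 0.5, 0.9])          # sample per-pixel SSIM values
for step in (0, 5000, 50000):
    sigma = alpha * step + beta
    print(step, gauss(ssim, sigma))
# With this formula, pixels with lower SSIM receive larger weights, and as sigma
# grows with the step the weights flatten towards maxVal for every pixel.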

Citation

If you find this project useful for your research, please cite:

@inproceedings{lin2020pixel,
  title={Pixel-Level Self-Paced Learning For Super-Resolution},
  author={Lin, Wei and Gao, Junyu and Wang, Qi and Li, Xuelong},
  booktitle={ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2020},
  pages={2538-2542}
}