Elin24 / PSPL

The implementation of ICASSP 2020 paper "Pixel-level self-paced learning for super-resolution"

Could you please specify the usage of your code? THX #3

Closed JustinAsdz closed 3 years ago

JustinAsdz commented 3 years ago

Hi Elin

Thanks for your work. Recently I have been trying to reproduce the results of your method. Could you describe the workflow of your paper, so that the code is easier for others to use?

Thanks !

Elin24 commented 3 years ago

There are 6 steps for an input LR image and its ground-truth HR image:

  1. use a super-resolution network to generate the SR image;
  2. compute the SSIM map between HR and SR, corresponding to the following code:
    ssim = pytorch_ssim.ssim(hr, sr, reduction='none').detach()
  3. use a Gaussian function to generate the attention map:
    # the meaning of sigma and maxVal can be found in my paper
    gauss = lambda x: torch.exp(-((x + 1) / sigma) ** 2) * self.maxVal
    weight = gauss(ssim).detach()
  4. generate the new HR and new SR images:
    nsr = sr * weight
    nhr = hr * weight
  5. apply a loss function such as MSE or MAE to compute the final loss value:
    loss = mae(nsr, nhr) # use MAE as an example
  6. back-propagate, which PyTorch does via loss.backward()
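To make the weighting in steps 3–5 concrete, here is a minimal plain-Python sketch (no PyTorch, scalar pixels instead of tensors) of the Gaussian attention map and the weighted MAE loss. The sigma and maxVal values and the toy SSIM/pixel numbers are illustrative assumptions, not values from the paper:

```python
import math

# hypothetical hyperparameters; their meaning is explained in the paper
sigma, max_val = 1.0, 1.0

def gauss(x):
    # attention weight computed from a per-pixel SSIM value in [-1, 1];
    # mirrors: torch.exp(-((x + 1) / sigma) ** 2) * maxVal
    return math.exp(-((x + 1) / sigma) ** 2) * max_val

# toy per-pixel SSIM map plus matching HR and SR pixel values
ssim_map = [0.2, 0.9, -0.1, 0.5]
hr = [0.8, 0.6, 0.4, 0.2]
sr = [0.7, 0.6, 0.5, 0.1]

# step 3: per-pixel attention weights
weights = [gauss(s) for s in ssim_map]

# steps 4-5: weight both images, then take the MAE between them,
# which reduces to a weighted absolute pixel difference
loss = sum(abs(s * w - h * w) for s, h, w in zip(sr, hr, weights)) / len(hr)
```

Note that pixels with a lower SSIM value receive a larger weight under this Gaussian, so they contribute more to the loss.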

PSPL is a very simple method to accelerate convergence; all of its functionality is represented by the code shown in the Core Part of the README file.

JustinAsdz commented 3 years ago

Thanks for your quick reply,

Your explanation is clear and precise, but could you show the steps to run the code, for example dataset preparation or the order in which to run the files? Thanks

Elin24 commented 3 years ago

I explained it in issue 1, including preparing the data and running the code in src. You can refer to that.

JustinAsdz commented 3 years ago

Thanks, I will check issue 1.