Closed: Zheng222 closed this issue 6 years ago
If you move `output = self.upsample(x)`, `list_out.append(output)`, `HR_2x.append(tmp)`, and `HR_4x.append(tmp)` into the for loop and train the model, you will supervise each block in each branch. In the end you will get a higher PSNR. This is an experimental feature and was not mentioned in the paper.
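The restructuring described above can be sketched as follows. This is a hypothetical illustration, not the actual SESR code: `blocks` and `upsample` stand in for the real recursive blocks and upsampling module, and plain callables are used so the control-flow difference is visible without PyTorch.

```python
def forward_single_supervision(x, blocks, upsample):
    # Original style: only the final block's output is upsampled and
    # supervised, so intermediate blocks get no direct loss signal.
    for block in blocks:
        x = block(x)
    return [upsample(x)]  # one output -> one loss term

def forward_multi_supervision(x, blocks, upsample):
    # Experimental style: the upsample + append is moved inside the loop,
    # so every block in the branch contributes its own supervised output.
    list_out = []
    for block in blocks:
        x = block(x)
        list_out.append(upsample(x))  # moved into the for loop
    return list_out  # one loss term per block
```

With toy callables, the multi-supervised version returns one output per block while the original returns only the last:

```python
blocks = [lambda v: v + 1, lambda v: v * 2, lambda v: v - 3]
identity_up = lambda v: v
forward_single_supervision(0, blocks, identity_up)  # [-1]
forward_multi_supervision(0, blocks, identity_up)   # [1, 2, -1]
```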
Hello, Fig. 3 in your paper suggests a multi-supervised style as in LapSRN. The result I obtain is much higher than the one reported in the paper. Are the training datasets 291 + General100? @opteroncx
I have updated the checkpoint with DIV2K. @Zheng222
Hello, I carefully read your code and have some questions about the following lines: https://github.com/opteroncx/SESR/blob/157fa46a46d1e4b922446ee8adff7421bd5b09f0/main.py#L131-L137 https://github.com/opteroncx/SESR/blob/157fa46a46d1e4b922446ee8adff7421bd5b09f0/main.py#L107-L114
`len(imglist)` at this position equals 1, so the `branchloss` function reduces to the plain `criterion` function. My question is: why not call the `criterion` function directly, like `loss_x2 = criterion(HR_2x[0], label_x2)`?
Also, I evaluated the checkpoint `.SESR/model/model.pth` and got a different result than what is described in the `readme.md` file. I want to know how this checkpoint was obtained. Specifically, what is the training dataset? Thank you, @opteroncx