alterzero / DBPN-Pytorch

The project is an official implementation of our CVPR2018 paper "Deep Back-Projection Networks for Super-Resolution" (Winner of NTIRE2018 and PIRM2018)
https://alterzero.github.io/projects/DBPN.html
MIT License

Question in main.py #50

Open yangyingni opened 5 years ago

yangyingni commented 5 years ago
```python
def train(epoch):
    epoch_loss = 0
    model.train()
    for iteration, batch in enumerate(training_data_loader, 1):
        input, target, bicubic = Variable(batch[0]), Variable(batch[1]), Variable(batch[2])
```

What does `input, target, bicubic = Variable(batch[0]), Variable(batch[1]), Variable(batch[2])` mean? I tried to add some layers to your model, but it shows `RuntimeError: The size of tensor a (160) must match the size of tensor b (80) at non-singleton dimension 3`. I am confused. I will be thankful if you can answer me. Thanks.

danielkovacsdeak commented 3 years ago

Hi, since this is in main.py, it's about training. The training process uses the high-resolution (HR) training images, downscales them to get the low-resolution (LR) input image, and upscales the input image to get bicubic. It doesn't do this with the full image, only with a random patch of it. The patch size is a parameter of main.py (the default is 40x40).

The data loader picks a training HR image and downscales it by the scaling factor to get the LR image. It then extracts a patch from the LR image and the corresponding larger patch from the HR image. It also upscales the small patch to get the bicubic tensor (see dataset.py for details). The default batch size is 1, so by default the process runs on a single image in each iteration; if you increase the batch size, it is simply repeated that many times.

In the code snippet you quoted, the data is passed from the data loader (batch) to the new variables (input, target, and bicubic). It also wraps the data in torch's Variable class instead of e.g. a numpy array or whatever the original data format is.
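The data flow described above can be sketched roughly like this. This is a minimal illustration, not the repo's actual code: it assumes scale factor 4 and the default 40x40 LR patch, fixes the crop position for reproducibility, and uses `F.interpolate` as a stand-in for the PIL-based bicubic resizing that dataset.py actually performs.

```python
import torch
import torch.nn.functional as F

scale = 4        # upscaling factor (assumed here)
patch_size = 40  # LR patch size, the main.py default

# Stand-in for one HR training image with shape (C, H, W).
hr_image = torch.rand(3, 320, 320)

# Downscale HR -> LR (interpolate is a stand-in for the PIL bicubic resize).
lr_image = F.interpolate(hr_image.unsqueeze(0), scale_factor=1 / scale,
                         mode='bicubic', align_corners=False).squeeze(0)

# Take a 40x40 LR patch and the corresponding 160x160 HR patch.
top, left = 10, 20  # would be random in the real loader
input = lr_image[:, top:top + patch_size, left:left + patch_size]
target = hr_image[:, top * scale:(top + patch_size) * scale,
                  left * scale:(left + patch_size) * scale]

# Upscale the LR patch to get the bicubic tensor (same size as target).
bicubic = F.interpolate(input.unsqueeze(0), scale_factor=scale,
                        mode='bicubic', align_corners=False).squeeze(0)

print(input.shape)    # torch.Size([3, 40, 40])
print(target.shape)   # torch.Size([3, 160, 160])
print(bicubic.shape)  # torch.Size([3, 160, 160])
```

Regarding the error in the question: a 160-vs-80 mismatch at dimension 3 is consistent with an added layer producing a tensor still at LR size (80) at a point where an HR-sized tensor (160) is expected, though the exact cause depends on where the layers were inserted.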