sanghyun-son / EDSR-PyTorch

PyTorch version of the paper 'Enhanced Deep Residual Networks for Single Image Super-Resolution' (CVPRW 2017)
MIT License

When using a bigger training set, training is skipped and only evaluation runs #74

Closed: ffffyuan closed this issue 5 years ago

ffffyuan commented 5 years ago

I want to train on my own dataset, which consists of small patches (more than 1e6 of them), but the process only runs evaluation and never trains. For example:

python main.py --model EDSR --scale 2 --save EDSR_x2 --n_resblocks 32 --n_feats 256 --res_scale 0.1 --dir_data ~/code/dataset --batch_size 1 --patch_size 64 --data_range 1-20000/20001-20010

20000 10
Making model...
Preparing loss function:
1.000 * L1
[Epoch 1] Learning rate: 1.00e-4

Evaluation: 100%|███████████████████████████████████████████| 10/10 [00:01<00:00, 9.68it/s]
[DIV2K x2] PSNR: 10.102 (Best: 10.102 @epoch 1)
Total time: 1.04s

Please tell me why this happens? Thank you.

sanghyun-son commented 5 years ago

Hello.

For now, please fix this line as below:

self.repeat = max(args.test_every * args.batch_size // len(self.images_hr), 1)
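For context: self.repeat controls how many times the training image list is reused within one epoch. With a very large training set, test_every * batch_size // len(self.images_hr) truncates to 0, so the training dataset reports length 0 and each epoch skips straight to evaluation; the max(..., 1) clamp above guards against that. Below is a minimal sketch of that idea, not the actual SRData class in data/srdata.py (which contains more logic); the names test_every, batch_size, and images_hr follow the repository's conventions.

```python
# Minimal sketch: how self.repeat determines the training dataset length.
# This is a simplified stand-in, not the real SRData implementation.
class SRDataSketch:
    def __init__(self, args, images_hr, train=True):
        self.train = train
        self.images_hr = images_hr
        # Without the max(..., 1) clamp, a training set larger than
        # test_every * batch_size makes the integer division return 0,
        # so __len__() is 0 and the training loop yields no batches.
        self.repeat = max(args.test_every * args.batch_size // len(self.images_hr), 1)

    def __len__(self):
        if self.train:
            # One epoch walks over the image list self.repeat times.
            return len(self.images_hr) * self.repeat
        return len(self.images_hr)

    def __getitem__(self, idx):
        # Indices wrap around the underlying image list.
        return self.images_hr[idx % len(self.images_hr)]
```

With the settings above (20000 training images, batch_size 1) and, assuming it was not changed, the default test_every of 1000, the unclamped value would be 1000 * 1 // 20000 = 0, which matches the evaluation-only behavior you are seeing.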

I will fix this problem.

Thank you!