VITA-Group / EnlightenGAN

[IEEE TIP] "EnlightenGAN: Deep Light Enhancement without Paired Supervision" by Yifan Jiang, Xinyu Gong, Ding Liu, Yu Cheng, Chen Fang, Xiaohui Shen, Jianchao Yang, Pan Zhou, Zhangyang Wang

training error (output size is too small) #52

Closed ZaynF closed 4 years ago

ZaynF commented 4 years ago

Hi, I always get this error every time I train, but I haven't changed any parameters yet (only the patch size and batch size). How can I resolve this?

```
model [SingleGANModel] was created
Setting up a new session...
create web directory ./checkpoints\enlightening\web...
C:\Users\DCMC\Anaconda3\envs\EnlightenGAN\lib\site-packages\torch\nn\functional.py:1890: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
  warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
C:\Users\DCMC\Anaconda3\envs\EnlightenGAN\lib\site-packages\torch\nn\functional.py:1961: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  "See the documentation of nn.Upsample for details.".format(mode))
Traceback (most recent call last):
  File "train.py", line 31, in <module>
    model.optimize_parameters(epoch)
  File "D:\Low-Light_Enhancement\EnlightenGAN-master\models\single_model.py", line 398, in optimize_parameters
    self.backward_G(epoch)
  File "D:\Low-Light_Enhancement\EnlightenGAN-master\models\single_model.py", line 339, in backward_G
    self.fake_patch, self.input_patch) * self.opt.vgg
  File "D:\Low-Light_Enhancement\EnlightenGAN-master\models\networks.py", line 1028, in compute_vgg_loss
    img_fea = vgg(img_vgg, self.opt)
  File "C:\Users\DCMC\Anaconda3\envs\EnlightenGAN\lib\site-packages\torch\nn\modules\module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\DCMC\Anaconda3\envs\EnlightenGAN\lib\site-packages\torch\nn\parallel\data_parallel.py", line 121, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "C:\Users\DCMC\Anaconda3\envs\EnlightenGAN\lib\site-packages\torch\nn\modules\module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "D:\Low-Light_Enhancement\EnlightenGAN-master\models\networks.py", line 963, in forward
    h = F.max_pool2d(h, kernel_size=2, stride=2)
  File "C:\Users\DCMC\Anaconda3\envs\EnlightenGAN\lib\site-packages\torch\nn\functional.py", line 396, in max_pool2d
    ret = torch._C._nn.max_pool2d_with_indices(input, kernel_size, stride, padding, dilation, ceil_mode)
RuntimeError: Given input size: (128x1x1). Calculated output size: (128x0x0). Output size is too small at c:\programdata\miniconda3\conda-bld\pytorch_1533086652614\work\aten\src\thcunn\generic/SpatialDilatedMaxPooling.cu:69
```
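For reference, the failing call can be reproduced in isolation with a minimal sketch (not repo code, just plain PyTorch): once a feature map has been downsampled to 1x1, another `max_pool2d` with `kernel_size=2, stride=2` cannot produce a valid output, which is exactly the error in the last frame of the traceback.

```python
import torch
import torch.nn.functional as F

# Feature map already reduced to 1x1 (as in the VGG loss forward pass above).
x = torch.randn(1, 128, 1, 1)

try:
    # The next stride-2 max pool would need an output of 0x0 -> RuntimeError.
    F.max_pool2d(x, kernel_size=2, stride=2)
except RuntimeError as e:
    print(e)  # "Output size is too small"
```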

yifanjiang19 commented 4 years ago

The input resolution should be larger than 256 x 256; otherwise, the max-pooling layers will downsample the feature map to a size smaller than 1 x 1.
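A minimal sketch of this size arithmetic (an illustration, not part of the repository): each `max_pool2d(kernel_size=2, stride=2)` halves the spatial resolution with floor division, so a crop must still be at least 2 pixels wide before every pooling stage it passes through. The helper below, with a hypothetical name and an assumed number of pooling stages, checks whether a given training crop survives.

```python
def survives_pools(size: int, num_pools: int) -> bool:
    """Return True if a square input of `size` pixels can pass through
    `num_pools` successive max_pool2d(kernel_size=2, stride=2) layers."""
    for _ in range(num_pools):
        if size < 2:      # pooling a 1x1 map raises "Output size is too small"
            return False
        size //= 2        # each stride-2 pool halves the spatial resolution
    return True

# Example: with 5 pooling stages (as in a VGG16-style feature extractor),
# small crops fail while crops of 256 and above pass comfortably.
for crop in (8, 16, 32, 128, 256, 320):
    print(crop, "->", "ok" if survives_pools(crop, num_pools=5) else "too small")
```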