Gavin666Github closed this issue 6 years ago.
What is your input size ?
@hezhangsprinter Thank you for your reply! The input parameters are as follows:
BTW, I generated the samples using 'create_train.py' (after downloading the NYU-Depth V2 dataset from http://horatio.cs.nyu.edu/mit/silberman/nyu_depth_v2/nyu_depth_v2_labeled.mat).
I mean, what is your input size? 512X512 or ???
@hezhangsprinter Sorry, I use the NYU-Depth dataset, and I didn't change any size after running 'create_train.py'. Here is what print(input.size()) shows:
It's 224x224.
As shown in the figure above, the input parameters are as follows:
python3 train.py --dataroot ./facades/train --valDataroot ./facades/test --exp ./checkpoints_new
Namespace(annealEvery=400, annealStart=0, batchSize=1, beta1=0.5, dataroot='./facades/train', dataset='pix2pix', display=5, evalIter=50, exp='./checkpoints_new', imageSize=256, inputChannelSize=3, lambdaGAN=0.35, lambdaIMG=1, lrD=0.0002, lrG=0.0002, mode='B2A', ndf=64, netD='', netG='', ngf=64, niter=400, originalSize=286, outputChannelSize=3, poolSize=50, valBatchSize=150, valDataroot='./facades/test', wd=0.0, workers=1)
Do you mean that I can adjust the input size and retrain? If so, where do I modify this parameter?
Thank you again. I look forward to your reply.
You should use an input size of 512x512, or any size that is a multiple of 128 (or 64). Please go over my model to see the reason.
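For context on why the size needs to be a multiple of 64/128: each stride-2 encoder stage halves the spatial resolution, and 224 (which is not a multiple of 128) collapses to 1x1 after seven halvings, while 512 still leaves a 4x4 feature map at the same depth. The figure of seven halvings is an assumption inferred from the [1, 64, 1, 1] tensor in the traceback, not read out of the model code; this is just a sketch of the arithmetic:

```python
def spatial_size(size, num_halvings):
    """Spatial size after repeated stride-2 downsampling (floor division)."""
    for _ in range(num_halvings):
        size //= 2
    return size

# 224 collapses to 1x1 after seven halvings:
print([spatial_size(224, k) for k in range(1, 8)])  # [112, 56, 28, 14, 7, 3, 1]
# 512 survives the same depth with a 4x4 feature map:
print([spatial_size(512, k) for k in range(1, 8)])  # [256, 128, 64, 32, 16, 8, 4]
```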
Thank you very much.
Traceback (most recent call last):
  File "train.py", line 286, in <module>
    x_hat, tran_hat, atp_hat, dehaze21 = netG(input)
  File "/home/gavin/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/gavin/MyProject/python/image_inpainting/De-haze/DCPDN/models/dehaze22.py", line 696, in forward
    atp = self.atp_est(x)
  File "/home/gavin/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/gavin/MyProject/python/image_inpainting/De-haze/DCPDN/models/dehaze22.py", line 473, in forward
    out7 = self.layer7(out6)
  File "/home/gavin/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/gavin/.local/lib/python3.5/site-packages/torch/nn/modules/container.py", line 91, in forward
    input = module(input)
  File "/home/gavin/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/gavin/.local/lib/python3.5/site-packages/torch/nn/modules/batchnorm.py", line 66, in forward
    exponential_average_factor, self.eps)
  File "/home/gavin/.local/lib/python3.5/site-packages/torch/nn/functional.py", line 1251, in batch_norm
    raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
ValueError: Expected more than 1 value per channel when training, got input size [1, 64, 1, 1]
Exception ignored in: <bound method _DataLoaderIter.__del__ of <torch.utils.data.dataloader._DataLoaderIter object at 0x7f0b2d4afdd8>>
Traceback (most recent call last):
  File "/home/gavin/.local/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 399, in __del__
  File "/home/gavin/.local/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 378, in _shutdown_workers
  File "/usr/lib/python3.5/multiprocessing/queues.py", line 345, in get
  File "<frozen importlib._bootstrap>", line 969, in _find_and_load
  File "<frozen importlib._bootstrap>", line 954, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 887, in _find_spec
TypeError: 'NoneType' object is not iterable
Is anybody else experiencing this problem? batchSize was set to 1.
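For what it's worth, the ValueError is PyTorch's BatchNorm refusing to normalize when a channel contributes only a single value: with batchSize=1 and a 1x1 feature map (the [1, 64, 1, 1] tensor above), each channel's batch statistics are computed from exactly one number, so the variance is zero and every normalized output would collapse to zero; PyTorch raises instead of silently producing that. A minimal plain-Python sketch of the batch statistics (not the DCPDN code) shows the degenerate case:

```python
def batch_norm_stats(values):
    """Per-channel batch statistics that BatchNorm needs: (mean, variance)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return mean, var

# With batch size 1 and a 1x1 feature map, a channel sees exactly one value,
# so the variance is 0 and (x - mean) / sqrt(var + eps) degenerates to 0:
print(batch_norm_stats([0.73]))       # (0.73, 0.0)
# With more than one value per channel, the statistics are meaningful:
print(batch_norm_stats([1.0, 3.0]))   # (2.0, 1.0)
```

A common workaround is to train with a batch size greater than 1, or to avoid batch statistics at that layer entirely; whether either change is appropriate for this model is a question for the authors, not something this sketch settles.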