mattans / AgeProgression


ValueError #15

Open yangyang-nus-lv opened 5 years ago

yangyang-nus-lv commented 5 years ago

Hi, when I run this code I get a ValueError, shown below:

Traceback (most recent call last):
  File "main.py", line 129, in <module>
    models_saving=args.models_saving
  File "/home/yang/Documents/code/CAAE/pytorch/model.py", line 417, in teach
    d_z_prior = self.Dz(z_prior)
  File "/home/yang/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/yang/Documents/code/CAAE/pytorch/model.py", line 96, in forward
    out = layer(out)
  File "/home/yang/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/yang/anaconda3/lib/python3.7/site-packages/torch/nn/modules/container.py", line 91, in forward
    input = module(input)
  File "/home/yang/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/yang/anaconda3/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py", line 66, in forward
    exponential_average_factor, self.eps)
  File "/home/yang/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 1251, in batch_norm
    raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
ValueError: Expected more than 1 value per channel when training, got input size [1, 64]
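For context, here is a minimal standalone sketch of what triggers this error (my own illustrative snippet, not the repo's code): nn.BatchNorm1d in training mode needs more than one sample per channel to compute batch statistics, so a batch of size 1 raises exactly this ValueError.

```python
import torch
import torch.nn as nn

# A 64-channel BatchNorm1d layer, matching the [1, 64] input in the traceback.
bn = nn.BatchNorm1d(64)
bn.train()  # training mode: batch statistics are computed from the batch

try:
    bn(torch.randn(1, 64))  # batch of 1 sample -> cannot compute variance
except ValueError as e:
    print(e)  # "Expected more than 1 value per channel when training, ..."

# With two or more samples per batch the same layer works fine.
out = bn(torch.randn(2, 64))
print(out.shape)  # torch.Size([2, 64])
```

This is why the suggestions below (changing the batch size, or dropping the last incomplete batch) make the error go away: both ensure no batch of size 1 ever reaches a BatchNorm layer during training.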

Could this error be fixed easily? I haven't looked through your full code very carefully yet, so apologies if it's a silly question ^_-

mattans commented 5 years ago
  1. what's your command?
  2. I think there's a similar thread here
  3. try changing the batch size

yangyang-nus-lv commented 5 years ago

I just used the default values and ran tag v1.1 with the command: python main.py --mode train --input ./data/UTKFace --output ./results I still got the same error.

ArashHosseini commented 5 years ago

It works fine, but only with a batch size of 128.

Actasidiot commented 5 years ago

I solved the problem by setting 'drop_last' to True in the DataLoaders in model.py, i.e.: train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True, drop_last=True) and valid_loader = DataLoader(dataset=valid_dataset, batch_size=batch_size, shuffle=False, drop_last=True).

Hope it helps.