sergeytulyakov / mocogan

MoCoGAN: Decomposing Motion and Content for Video Generation

Output size is too small. #17

Closed. bragilee closed this issue 5 years ago.

bragilee commented 5 years ago

Hi, thank you so much for your work. I have tried to run MoCoGAN on my own data, prepared in the same format as the 'actions' folder, except that each video is only 8 frames long. When I run training I get the error below. Have I missed something?

```
Traceback (most recent call last):
  File "train.py", line 133, in <module>
    trainer.train(generator, image_discriminator, video_discriminator)
  File "/data2/Runze/mocogan/src/trainers.py", line 271, in train
    self.video_batch_size, use_categories=self.use_categories)
  File "/data2/Runze/mocogan/src/trainers.py", line 170, in train_discriminator
    real_labels, real_categorical = discriminator(batch)
  File "/home/runzeli/anaconda3/envs/python27/lib/python2.7/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/data2/Runze/mocogan/src/models.py", line 180, in forward
    h, _ = super(CategoricalVideoDiscriminator, self).forward(input)
  File "/data2/Runze/mocogan/src/models.py", line 162, in forward
    h = self.main(input).squeeze()
  File "/home/runzeli/anaconda3/envs/python27/lib/python2.7/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/runzeli/anaconda3/envs/python27/lib/python2.7/site-packages/torch/nn/modules/container.py", line 67, in forward
    input = module(input)
  File "/home/runzeli/anaconda3/envs/python27/lib/python2.7/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/runzeli/anaconda3/envs/python27/lib/python2.7/site-packages/torch/nn/modules/conv.py", line 388, in forward
    self.padding, self.dilation, self.groups)
  File "/home/runzeli/anaconda3/envs/python27/lib/python2.7/site-packages/torch/nn/functional.py", line 126, in conv3d
    return f(input, weight, bias)
RuntimeError: Given input size: (128, 2, 16, 16). Calculated output size: (1, -1, 8, 8). Output size is too small.
```

Here is the command I run:

```
python train.py \
    --image_batch 8 \
    --video_batch 1 \
    --use_infogan \
    --use_noise \
    --noise_sigma 0.1 \
    --image_discriminator PatchImageDiscriminator \
    --video_discriminator CategoricalVideoDiscriminator \
    --print_every 100 \
    --every_nth 1 \
    --dim_z_content 50 \
    --dim_z_motion 8 \
    --dim_z_category 1 \
    ../data/actions ../logs/actions
```

Thank you. :)
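For context on the numbers in the traceback: the video discriminator stacks Conv3d layers whose temporal kernel (size 4 with no temporal padding, as the error sizes imply) consumes 3 frames per layer, so an 8-frame clip runs out of frames by the third convolution. Below is a minimal sketch of that arithmetic and of the failing call; the kernel, stride, padding, and channel sizes are inferred from the traceback, not copied from the repository.

```python
import torch
import torch.nn as nn

# Conv output size along one dimension: floor((d + 2*pad - kernel) / stride) + 1
def conv_out(d, kernel=4, stride=1, pad=0):
    return (d + 2 * pad - kernel) // stride + 1

# Temporal dimension of an 8-frame clip through three Conv3d layers that use
# kernel 4, stride 1 and no padding in time: 8 -> 5 -> 2 -> -1 (invalid).
depth = 8
for layer in range(1, 4):
    depth = conv_out(depth)
    print('temporal size after layer %d: %d' % (layer, depth))

# The failing call itself: input (128, 2, 16, 16) and a Conv3d with kernel 4,
# spatial stride 2 / padding 1, and no temporal padding gives output (-1, 8, 8).
clip = torch.randn(1, 128, 2, 16, 16)
conv = nn.Conv3d(128, 256, 4, stride=(1, 2, 2), padding=(0, 1, 1), bias=False)
try:
    conv(clip)
except RuntimeError as err:
    print(err)  # size error: the temporal output size would be -1
```

With the default 16-frame clips the temporal size goes 16 -> 13 -> 10 -> 7 and every layer stays valid, which is why the stock setup trains while 8-frame clips do not without changing the layers.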

bragilee commented 5 years ago

Update:

Modifying the layers in models.py solves it.

Thanks. :)
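The comment above does not spell out which layers were changed. One kind of modification it could refer to is shrinking the temporal kernels and/or adding temporal padding in the video discriminator so the temporal dimension stays positive for 8-frame clips. The sketch below is hypothetical; the channel counts and kernel shapes are illustrative, not the repository's exact models.py configuration.

```python
import torch
import torch.nn as nn

ndf, n_channels = 64, 3   # illustrative sizes, not the repository's exact values
n_output = 2              # e.g. a real/fake score plus one categorical logit

# Temporal kernels and padding chosen so an 8-frame clip keeps a positive
# temporal size at every layer: 8 -> 8 -> 4 -> 2 -> 1 (spatial: 64 -> 32 -> 16 -> 8 -> 5).
video_discriminator = nn.Sequential(
    nn.Conv3d(n_channels, ndf, (3, 4, 4), stride=(1, 2, 2), padding=(1, 1, 1), bias=False),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv3d(ndf, ndf * 2, 4, stride=2, padding=1, bias=False),
    nn.BatchNorm3d(ndf * 2),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv3d(ndf * 2, ndf * 4, 4, stride=2, padding=1, bias=False),
    nn.BatchNorm3d(ndf * 4),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv3d(ndf * 4, n_output, (2, 4, 4), stride=1, padding=0, bias=False),
)

clip = torch.randn(2, n_channels, 8, 64, 64)   # a batch of two 8-frame 64x64 clips
print(video_discriminator(clip).size())        # torch.Size([2, 2, 1, 5, 5])
```

Whatever shapes you pick, the check is the same for every Conv3d along the temporal axis: floor((D + 2*pad - kernel) / stride) + 1 must stay at least 1 for your video length.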