Closed: TheIllusion closed this issue 7 years ago
Hi,
Could you change
generator = torch.load(args["<model>"], map_location={'cuda:0': 'cpu'})
to
generator = torch.load(args["<model>"])
and see if it works?
It looks like some of the tensors are in gpu, but some are in cpu.
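For context, `map_location` remaps saved storages at load time: the dict form `{'cuda:0': 'cpu'}` only remaps storages that were saved on `cuda:0`, so a checkpoint whose tensors lived on several devices can come back in a mixed CPU/GPU state. A minimal round-trip sketch (the filename here is a stand-in, not the real checkpoint):

```python
import os
import tempfile
import torch

# Save a tensor to a throwaway path (stand-in for the real checkpoint file).
path = os.path.join(tempfile.mkdtemp(), "generator.pytorch")
torch.save(torch.randn(3), path)

# map_location="cpu" forces every storage onto the CPU at load time;
# omitting it restores each tensor to the device it was saved on,
# while a dict like {'cuda:0': 'cpu'} remaps only the matching device.
t = torch.load(path, map_location="cpu")
print(t.device)
```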
Thanks a lot for your quick reply. I think I'm almost there.
Sorry for another dumb question, but I'm encountering the following new problem.
Could you guide me on how to avoid this?
It looks like the issue is with ffmpeg. If you change
pipe = sp.Popen(command, stdin=sp.PIPE, stderr=sp.PIPE)
to
pipe = sp.Popen(command, stdin=sp.PIPE)
you will be able to see more details about the error.
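The reason this helps: with `stderr=sp.PIPE` the child's error text is captured into a pipe that the script never reads, so it stays invisible. A small sketch of the effect, using a stand-in child process instead of the real ffmpeg command:

```python
import subprocess as sp
import sys

# Stand-in for the ffmpeg invocation: a child that writes to stderr.
cmd = [sys.executable, "-c",
       "import sys; sys.stderr.write('simulated ffmpeg error')"]

# With stderr=sp.PIPE the message is captured and never shown unless
# the pipe is explicitly read (a full, unread pipe can even stall the
# child process):
hidden = sp.Popen(cmd, stdin=sp.PIPE, stderr=sp.PIPE)
_, err = hidden.communicate()
print("captured:", err.decode())

# Without stderr=sp.PIPE the child writes straight to the terminal,
# so the real error text shows up immediately.
visible = sp.Popen(cmd, stdin=sp.PIPE)
visible.communicate()
```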
Thanks a lot!
Hi TheIllusion,
Are you able to execute the code? I am not using Docker as suggested; I'm using Python 3.6 on the Ubuntu command prompt. I have also opened an issue.
After following the ffmpeg suggestion, I'm getting this error.
Attached a screenshot of the error: https://drive.google.com/file/d/13AuoobWDDfAEC4yNQQpiRVllvu-NRjwt/view?usp=sharing @sergeytulyakov Please help.
I know this might be late, but I hope it can still help others. For Python 3, modify
pipe = sp.Popen(command, stdin=sp.PIPE, stderr=sp.PIPE)
to
pipe = sp.Popen(command, stdin=sp.PIPE)
as stated by @sergeytulyakov, and change the stdin.write call to
pipe.communicate(video.tostring())
This fixed the broken pipe problem. Reference from this gist.
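Putting the two changes together, here is a sketch with a stand-in child process in place of the real ffmpeg command (it simply reports how many bytes it received on stdin). Note that `communicate()` writes all the data, closes stdin, and waits for the child to exit, which is what avoids the BrokenPipeError under Python 3:

```python
import subprocess as sp
import sys
import numpy as np

# Dummy stand-in for the generated video array (frames, H, W, channels).
video = np.zeros((16, 64, 64, 3), dtype=np.uint8)

# Stand-in for the ffmpeg command: a child that echoes back the number
# of bytes read from its stdin.
cmd = [sys.executable, "-c",
       "import sys; sys.stdout.write(str(len(sys.stdin.buffer.read())))"]

pipe = sp.Popen(cmd, stdin=sp.PIPE, stdout=sp.PIPE)
# tobytes() is the Python 3 replacement for the deprecated tostring().
out, _ = pipe.communicate(video.tobytes())
print(out.decode())  # 16 * 64 * 64 * 3 = 196608 bytes
```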
Dear author of MoCoGAN:
I am deeply impressed by your fantastic work, and I really appreciate that you have open-sourced this project.
I have a small problem when using the generate_videos.py file. After I trained the model and ran
"python generate_videos.py --num_videos 10 --output_format gif --number_of_frames 16 ../logs/actions/generator_21700.pytorch output"
the following error occurs:
Traceback (most recent call last):
  File "generate_videos.py", line 61, in <module>
    v, _ = generator.sample_videos(1, int(args['--number_of_frames']))
  File "/mocogan/src/models.py", line 268, in sample_videos
    z, z_category_labels = self.sample_z_video(num_samples, video_len)
  File "/mocogan/src/models.py", line 259, in sample_z_video
    z_motion = self.sample_z_m(num_samples, video_len)
  File "/mocogan/src/models.py", line 224, in sample_z_m
    h_t.append(self.recurrent(e_t, h_t[-1]))
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 224, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/rnn.py", line 682, in forward
    self.bias_ih, self.bias_hh,
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/rnn.py", line 49, in GRUCell
    gi = F.linear(input, w_ih)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/functional.py", line 555, in linear
    output = input.matmul(weight.t())
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 560, in matmul
    return torch.matmul(self, other)
  File "/usr/local/lib/python2.7/dist-packages/torch/functional.py", line 173, in matmul
    return torch.mm(tensor1, tensor2)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 579, in mm
    return Addmm.apply(output, self, matrix, 0, 1, True)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/_functions/blas.py", line 26, in forward
    matrix1, matrix2, out=output)
TypeError: torch.addmm received an invalid combination of arguments - got (int, torch.cuda.FloatTensor, int, torch.cuda.FloatTensor, torch.FloatTensor, out=torch.cuda.FloatTensor), but expected one of:
(float beta, torch.cuda.FloatTensor source, float alpha, torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2, *, torch.cuda.FloatTensor out) didn't match because some of the arguments have invalid types: (int, torch.cuda.FloatTensor, int, torch.cuda.FloatTensor, torch.FloatTensor, out=torch.cuda.FloatTensor)
I think there must be some mistake I made, but could you look into it and give me a clue?
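The TypeError above is a device mismatch: the GRU's input is a `torch.cuda.FloatTensor` while one of its weight tensors is still a CPU `torch.FloatTensor`, exactly the mixed CPU/GPU state described earlier in the thread. A minimal sketch of the same mismatch and its fix, written with the current PyTorch API rather than the old 0.x `Variable` one used in the traceback:

```python
import torch
import torch.nn as nn

# A freshly constructed GRUCell keeps its weights on the CPU.
cell = nn.GRUCell(input_size=10, hidden_size=20)
x = torch.randn(1, 10)

if torch.cuda.is_available():
    # Moving only the input reproduces the error in the traceback:
    # CUDA input against CPU weights. Moving the module as well
    # puts every tensor on the same device and resolves it.
    x = x.cuda()
    cell = cell.cuda()

h = cell(x)               # input and weights now share a device
print(tuple(h.shape))
```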