xiadingZ / video-caption.pytorch

pytorch implementation of video captioning
MIT License

size mismatch #53

Open baiyunfan123 opened 2 years ago

baiyunfan123 commented 2 years ago

D:\Anaconda\envs\vp12\lib\site-packages\torch\nn\modules\rnn.py:51: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.5 and num_layers=1
  "num_layers={}".format(dropout, num_layers))
D:\Anaconda\envs\vp12\lib\site-packages\torch\nn\_reduction.py:43: UserWarning: size_average and reduce args will be deprecated, please use reduction='none' instead.
  warnings.warn(warning.format(ret))
D:\Anaconda\envs\vp12\lib\site-packages\torch\optim\lr_scheduler.py:82: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
Traceback (most recent call last):
  File "train.py", line 133, in <module>
    main(opt)
  File "train.py", line 120, in main
    train(dataloader, model, crit, optimizer, exp_lr_scheduler, opt, rl_crit)
  File "train.py", line 40, in train
    seq_probs, _ = model(fc_feats, labels, 'train')
  File "D:\Anaconda\envs\vp12\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "D:\video-caption\code12pytorch\video-caption.pytorch-master\models\S2VTAttModel.py", line 28, in forward
    encoder_outputs, encoder_hidden = self.encoder(vid_feats)
  File "D:\Anaconda\envs\vp12\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "D:\video-caption\code12pytorch\video-caption.pytorch-master\models\EncoderRNN.py", line 53, in forward
    vid_feats = self.vid2hid(vid_feats.view(-1, dim_vid))
  File "D:\Anaconda\envs\vp12\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "D:\Anaconda\envs\vp12\lib\site-packages\torch\nn\modules\linear.py", line 87, in forward
    return F.linear(input, self.weight, self.bias)
  File "D:\Anaconda\envs\vp12\lib\site-packages\torch\nn\functional.py", line 1369, in linear
    ret = torch.addmm(bias, input, weight.t())
RuntimeError: size mismatch, m1: [12000 x 2048], m2: [4096 x 512] at C:/w/1/s/tmp_conda_3.7_055457/conda/conda-bld/pytorch_1565416617654/work/aten/src\THC/generic/THCTensorMathBlas.cu:273

How do I resolve this size mismatch? I can't find where to set the parameters of the convolutional layer.
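Reading the traceback, the mismatch is not in a convolutional layer but in the encoder's first linear projection (self.vid2hid in EncoderRNN.py): the model was built expecting 4096-dimensional video features (m2: [4096 x 512]), while the loaded feature files are 2048-dimensional (m1: [12000 x 2048], typical of pooled ResNet features rather than VGG fc7). A minimal sketch of the problem and a likely fix follows; the variable names and the suggestion to change the training option that controls the feature dimension (e.g. a dim_vid setting in opts.py) are assumptions for illustration, not confirmed from this issue alone.

import torch
import torch.nn as nn

# The encoder projects raw video features to the hidden size, roughly:
#   self.vid2hid = nn.Linear(dim_vid, dim_hidden)
dim_vid_expected = 4096   # what the model was built with (m2: [4096 x 512])
dim_hidden = 512
vid2hid = nn.Linear(dim_vid_expected, dim_hidden)

# The features actually loaded are 2048-dimensional (m1: [12000 x 2048]).
fake_feats = torch.randn(12000, 2048)

try:
    vid2hid(fake_feats)          # reproduces the size-mismatch RuntimeError
except RuntimeError as e:
    print(e)

# Fix (sketch): build the projection with the dimension of the features you
# actually extracted, i.e. make dim_vid match your feature files (here 2048).
vid2hid_fixed = nn.Linear(2048, dim_hidden)
out = vid2hid_fixed(fake_feats)  # works: shape [12000, 512]
print(out.shape)

Equivalently, re-extracting the video features with a network that outputs 4096-dimensional vectors (e.g. VGG16 fc7) would also make the shapes agree without touching the model configuration.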