kevinzakka / recurrent-visual-attention

A PyTorch Implementation of "Recurrent Models of Visual Attention"

I tried to increase the hidden unit size from 128 to 256 but I am getting the following error. #41

Open chatgptcoderhere opened 3 years ago

chatgptcoderhere commented 3 years ago

Epoch: 1/1500 - LR: 0.000300
0%| | 0/11044 [00:04<?, ?it/s]
Traceback (most recent call last):
  File "main.py", line 49, in <module>
    main(config)
  File "main.py", line 41, in main
    trainer.train()
  File "C:\Users\apatil\pyram - exp 2\pytorch_ram\trainer.py", line 164, in train
    train_loss, train_acc = self.train_one_epoch(epoch)
  File "C:\Users\apatil\pyram - exp 2\pytorch_ram\trainer.py", line 241, in train_one_epoch
    h_t, l_t, b_t, p = self.model(x, l_t, h_t)
  File "C:\tools\Anaconda3\envs\pytorch_ram\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\apatil\pyram - exp 2\pytorch_ram\model.py", line 80, in forward
    h_t = self.rnn(g_t, h_t_prev)
  File "C:\tools\Anaconda3\envs\pytorch_ram\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\apatil\pyram - exp 2\pytorch_ram\modules.py", line 226, in forward
    h1 = self.i2h(g_t)
  File "C:\tools\Anaconda3\envs\pytorch_ram\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\tools\Anaconda3\envs\pytorch_ram\lib\site-packages\torch\nn\modules\linear.py", line 93, in forward
    return F.linear(input, self.weight, self.bias)
  File "C:\tools\Anaconda3\envs\pytorch_ram\lib\site-packages\torch\nn\functional.py", line 1690, in linear
    ret = torch.addmm(bias, input, weight.t())
RuntimeError: mat1 and mat2 shapes cannot be multiplied (128x512 and 256x256)

(pytorch_ram) C:\Users\apatil\pyram - exp 2\pytorch_ram>
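For anyone hitting the same wall, the two shapes in the error message are informative: mat1 (128x512) is the batch of glimpse feature vectors g_t (batch size 128, feature size 512), and mat2 (256x256) is the transposed weight of the core network's i2h linear layer, which still expects 256-dimensional input. A likely cause is that the glimpse and location hidden sizes were both raised to 256, so the concatenated glimpse vector grew to 512 while the core network was still constructed with in_features = 256. Below is a minimal sketch of the constraint, not the repo's actual code; the names h_g, h_l, and hidden_size are assumptions read off the traceback.

import torch
import torch.nn as nn

batch_size = 128          # from mat1's first dimension in the error
h_g, h_l = 256, 256       # assumed glimpse/location hidden sizes after the change
hidden_size = 256         # core RNN hidden size

# Assumption: the glimpse network outputs g_t with width h_g + h_l.
g_t = torch.randn(batch_size, h_g + h_l)   # shape (128, 512)

# An i2h layer built as nn.Linear(hidden_size, hidden_size) reproduces the error:
i2h = nn.Linear(hidden_size, hidden_size)  # weight is (256, 256)
# i2h(g_t)  # RuntimeError: mat1 and mat2 shapes cannot be multiplied (128x512 and 256x256)

# Sizing i2h's input to match g_t makes the multiply go through:
i2h_fixed = nn.Linear(h_g + h_l, hidden_size)
h1 = i2h_fixed(g_t)                        # (128, 512) @ (512, 256) -> (128, 256)

In short, wherever the core network's first linear layer is constructed, its in_features must equal the width of g_t; if one of those sizes was changed from 128 to 256 without the other, the two drift apart and you get exactly this RuntimeError.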