dillondavis / RecurrentAttentionConvolutionalNeuralNetwork

http://openaccess.thecvf.com/content_cvpr_2017/papers/Fu_Look_Closer_to_CVPR_2017_paper.pdf

RuntimeError: grad can be implicitly created only for scalar outputs #4

Open heyongcs opened 6 years ago

heyongcs commented 6 years ago

When I use batch_size=32 instead of 1, I get this error:

    File "/mnt/workspace/py/RA_CNN/src/manager.py", line 163, in train
      self.do_epoch(epoch_idx, optimizer, optimize_class=optimize_class)
    File "/mnt/workspace/py/RA_CNN/src/manager.py", line 113, in do_epoch
      self.do_batch(optimizer, batch, label, optimize_class=optimize_class)
    File "/mnt/workspace/py/RA_CNN/src/manager.py", line 101, in do_batch
      self.criterion_rank(scores[i-1], scores[i], label).backward(retain_graph=retain_graph)
    File "/root/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 167, in backward
      torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
    File "/root/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 87, in backward
      grad_variables, create_graph = _make_grads(variables, grad_variables, create_graph)
    File "/root/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 35, in _make_grads
      raise RuntimeError("grad can be implicitly created only for scalar outputs")
    RuntimeError: grad can be implicitly created only for scalar outputs
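For context: `.backward()` called without a `gradient` argument only works when the loss is a single-element (scalar) tensor, because PyTorch seeds the backward pass with an implicit gradient of 1. With `batch_size=1` the rank loss happens to be one element, but with 32 samples it is a length-32 vector, so `_make_grads` raises this error. A rough pure-Python sketch of that check (a hypothetical simplification, not the actual PyTorch source):

```python
def make_grads(outputs):
    """Hypothetical sketch of what autograd does when backward() is
    called with no explicit gradient: each output is a list of values."""
    grads = []
    for out in outputs:
        if len(out) == 1:
            # scalar (single-element) loss: the seed gradient is just 1.0
            grads.append([1.0])
        else:
            # vector of per-sample losses: there is no unambiguous seed,
            # so PyTorch refuses rather than guessing
            raise RuntimeError(
                "grad can be implicitly created only for scalar outputs")
    return grads
```

Reducing the per-sample losses to a scalar (for example with `torch.mean`) before calling `.backward()` avoids the error.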

heyongcs commented 6 years ago

I modified the RankLoss function in manager.py:

    # probability of the true class at each scale
    ps1 = F.softmax(scores1, dim=1).gather(1, target.long().view(-1, 1))
    ps2 = F.softmax(scores2, dim=1).gather(1, target.long().view(-1, 1))
    # mean over the batch reduces the per-sample hinge losses to a scalar
    return torch.mean(torch.clamp(ps1 - ps2 + self.margin, min=0))

Is this right?
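This fix looks plausible: `torch.mean` collapses the per-sample hinge losses into the single scalar that `backward()` needs, while keeping the pairwise ranking objective of the RA-CNN paper (the next scale's probability for the true class should exceed the current scale's by at least a margin). The same computation in plain Python, with a hypothetical margin value:

```python
def rank_loss(ps1, ps2, margin=0.05):
    """Pairwise ranking (hinge) loss over a batch, reduced to a scalar.

    ps1[i] and ps2[i] are the softmax probabilities of the true class
    for sample i at scale s and scale s+1. Each sample contributes
    max(0, ps1 - ps2 + margin); the mean makes the result a scalar.
    """
    per_sample = [max(0.0, p1 - p2 + margin) for p1, p2 in zip(ps1, ps2)]
    return sum(per_sample) / len(per_sample)
```

When the finer scale is already more confident by at least the margin (e.g. `ps1=0.6`, `ps2=0.9`), that sample's loss is zero; otherwise it is penalized linearly.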

dillondavis commented 6 years ago

Interesting, did this work for you?