ratishsp / data2text-plan-py

Code for AAAI 2019 paper on Data-to-Text Generation with Content Selection and Planning

After loss.backward(), why is torch.autograd.backward(inputs, grads, retain_graph=retain_graph) needed? #11

Closed: LarryLee-BD closed this issue 5 years ago

LarryLee-BD commented 5 years ago

Hi Ratish, I have a question. After `_compute_loss()` and `loss.div(normalization).backward()`, why does the `shards` function also call `torch.autograd.backward(inputs, grads, retain_graph=retain_graph)`? Isn't it usual to call backward only on the loss? Why is backward run again on the inputs?

ratishsp commented 5 years ago

Hi @LarryLee-BD, this repo is based on a fork of OpenNMT-py. This thread https://github.com/OpenNMT/OpenNMT-py/issues/387 explains why there are two backward() operations.
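
In short, the idea is sharded loss computation: the loss is computed on detached per-shard copies so each shard's graph can be freed early, and a second `torch.autograd.backward(inputs, grads)` then pushes the accumulated shard gradients back into the main graph. Below is a minimal sketch of that pattern, not the repo's exact code; the names `sharded_backward`, `compute_loss`, and `shard_size` are illustrative only.

```python
import torch

def sharded_backward(output, shard_size, compute_loss):
    """Illustrative sketch of the two-stage backward used by sharded loss
    computation (names are hypothetical, not the repo's actual API).

    output:       a tensor inside the main graph (e.g. decoder states),
                  requiring grad.
    compute_loss: a callable mapping one shard to a scalar loss.
    """
    shards_list = torch.split(output, shard_size)

    # 1. Work on detached copies so each shard builds only a small,
    #    short-lived graph; its memory is freed after every loss.backward().
    detached = []
    for s in shards_list:
        d = s.detach().clone().requires_grad_(True)
        detached.append(d)
        loss = compute_loss(d)
        loss.backward()  # first backward: fills d.grad only, not the main graph

    # 2. Second backward: feed the accumulated shard gradients back into the
    #    main graph at the points where the shards were cut off. This is the
    #    torch.autograd.backward(inputs, grads) call in question.
    inputs = [s for s, d in zip(shards_list, detached) if d.grad is not None]
    grads = [d.grad for d in detached if d.grad is not None]
    torch.autograd.backward(inputs, grads)

# Toy usage: gradients reach the parameters of the main graph.
x = torch.randn(8, 4, requires_grad=True)
h = x * 2  # stand-in for the encoder/decoder computation
sharded_backward(h, shard_size=2,
                 compute_loss=lambda shard: shard.pow(2).sum())
print(x.grad.shape)  # torch.Size([8, 4])
```

The point of the two stages is memory: calling backward on the full loss would keep the whole generation graph alive at once, whereas sharding releases each shard's graph as soon as its gradients are stored, at the cost of the extra `torch.autograd.backward(inputs, grads)` to reconnect them.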