rtqichen opened this issue 8 years ago (status: Open)
Example code:
```lua
autograd = require 'autograd'
f = function(x) return torch.add(x, -2, x) end
df = autograd(f, {withForward = false, partialGrad = true})
x = torch.randn(3)
df(x, torch.ones(3))
```
The last line throws an error:

```
.../DirectTape.lua:165: missing gradient for argument 3 in function torch.add
```
Not a big deal, as this use of torch.add can be decomposed into a mul followed by an add. Just thought I'd mention it.
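Concretely, a rewrite along those lines might look like the sketch below (untested; it assumes autograd already has gradients for the two-argument torch.add and for torch.mul):

```lua
autograd = require 'autograd'

-- Same value as torch.add(x, -2, x), i.e. x + (-2) * x, but expressed as an
-- explicit mul followed by a plain two-tensor add.
f = function(x)
   return torch.add(x, torch.mul(x, -2))
end

df = autograd(f, {withForward = false, partialGrad = true})
x = torch.randn(3)
print(df(x, torch.ones(3)))  -- should print the gradient instead of erroring
```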
Thanks for the heads up. We tried to stay away from the more "exotic" call-signatures in torch, but I think there's no reason we couldn't accept a PR implementing this gradient.
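For reference, the missing rule itself is simple: torch.add(a, value, b) computes a + value * b elementwise, so the gradient with respect to a is the incoming gradient and the gradient with respect to b is value times it. A rough sketch of just that math (not the actual DirectTape/gradient-table registration, which is left as an assumption for whoever writes the PR):

```lua
-- Sketch only: partial derivatives for out = torch.add(a, value, b),
-- i.e. out = a + value * b elementwise.
local function addThreeArgGrads(gradOutput, a, value, b)
   local gradA = gradOutput                    -- d(out)/d(a) is the identity
   local gradB = torch.mul(gradOutput, value)  -- d(out)/d(b) scales by value
   return gradA, gradB
end
```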