twitter-archive / torch-autograd

Autograd automatically differentiates native Torch code
Apache License 2.0

torch.add(tensor1, scalar, tensor2) not supported. #155

Open rtqichen opened 8 years ago

rtqichen commented 8 years ago

Example code:

autograd = require 'autograd'
-- torch.add(tensor1, scalar, tensor2) computes tensor1 + scalar * tensor2,
-- so f(x) here returns x + (-2) * x.
f = function(x)
    return torch.add(x, -2, x)
end
df = autograd(f, {withForward = false, partialGrad = true})
x = torch.randn(3)
df(x, torch.ones(3))

The last line throws the error: .../DirectTape.lua:165: missing gradient for argument 3 in function torch.add.

Not a big deal, as this use of torch.add can be decomposed into a mul followed by an add. Just thought I'd mention it.
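
For anyone who hits the same error, here is a minimal sketch of that decomposition (my own rewrite of the repro above, not code from this thread). torch.mul and the plain two-tensor torch.add are, as far as I know, both supported by autograd, so the same function can be written as:

autograd = require 'autograd'

-- Same function as above, rewritten as an elementwise mul followed by an add:
-- x + (-2) * x, using only call signatures autograd already handles.
f = function(x)
    return torch.add(x, torch.mul(x, -2))
end

df = autograd(f, {withForward = false, partialGrad = true})
x = torch.randn(3)
df(x, torch.ones(3))  -- should return the gradient without the missing-gradient error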

alexbw commented 8 years ago

Thanks for the heads up. We tried to stay away from the more "exotic" call signatures in torch, but I think there's no reason we couldn't accept a PR implementing this gradient.
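
For anyone who wants to take that on, the math is straightforward: for ans = torch.add(a, s, b) = a + s * b, the gradient with respect to a is just the incoming gradient g, and the gradient with respect to b is s * g (the scalar s itself is not differentiated). A rough, standalone Lua sketch is below; the function names are placeholders and this is not autograd's actual registration API, so wiring the partials into the gradient table (src/gradfuns.lua, if I recall correctly) is left to the PR:

-- Conceptual partials for ans = torch.add(a, s, b) = a + s * b,
-- where g is the gradient flowing back from ans.
-- Illustrative standalone functions only; argument order loosely mirrors
-- the (g, ans, inputs...) pattern used elsewhere in the gradient definitions.
local function gradAddWrtA(g, ans, a, s, b)
    return g                  -- d(a + s*b)/da is the identity
end

local function gradAddWrtB(g, ans, a, s, b)
    return torch.mul(g, s)    -- d(a + s*b)/db = s
end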