twitter-archive / torch-autograd

Autograd automatically differentiates native Torch code
Apache License 2.0

Autograd UPDATES weight and bias that DON'T CONTRIBUTE to the output of the module! #151

Open dlmacedo opened 8 years ago

dlmacedo commented 8 years ago

Dear Friends,

I am using the following code:

function units.agSReLU(input, weight, bias)
  -- weight and bias appear only in this dead expression;
  -- y is never used, so the output does not depend on them
  local y = 0 * weight + 0 * bias
  -- ReLU: 0.5 * (|x| + x) == max(0, x)
  local output = torch.mul(torch.abs(input) + input, 0.5)
  return output
end

I am calling the above function from:

local autogradFunc = autograd.nn.AutoModule('AutoGradSReLU')
    (units.agSReLU, initialWeight:clone(), initialBias:clone())
model:add(autogradFunc)

And autograd IS UPDATING the weight and bias!

Could anybody please explain to me what is going on?

David
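
As a quick sanity check, here is a minimal sketch using torch-autograd's functional API (the names loss, params.W, and params.b are hypothetical, not taken from the issue above) to confirm what gradients autograd itself computes for weight and bias in this situation:

local autograd = require 'autograd'
require 'torch'

-- Loss that mirrors agSReLU: params.W and params.b appear only
-- in a dead expression, so the output never depends on them.
local function loss(params, input)
  local y = 0 * params.W + 0 * params.b  -- unused
  return torch.sum(torch.mul(torch.abs(input) + input, 0.5))
end

local dloss = autograd(loss)
local params = { W = torch.randn(5), b = torch.randn(5) }
local grads = dloss(params, torch.randn(5))
print(grads.W, grads.b)  -- expected: all-zero tensors

If the gradients come back zero, any change to the parameters must come from the optimizer, not from the backward pass.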

fmassa commented 8 years ago

This is probably due to the optimizer settings you are using. If you are using `weightDecay` (also known as L2 regularization) in SGD, it adds a contribution to the gradients that is a function of the weights, so the parameters are updated even when the loss does not depend on them.
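
For example, here is a self-contained demo with the optim package (a hypothetical sketch, not the poster's actual training loop): with weightDecay set, SGD updates a parameter even though its gradient is identically zero, because it adds weightDecay * x to the gradient before the descent step.

local optim = require 'optim'
require 'torch'

local x = torch.ones(3)  -- stand-in for the AutoModule's weight or bias

-- The loss ignores x entirely, so the true gradient is all zeros.
local function opfunc(params)
  return 0, torch.zeros(3)
end

local config = { learningRate = 0.1, weightDecay = 0.01 }
optim.sgd(opfunc, x, config)
print(x)  -- no longer all ones: the step applied was -learningRate * weightDecay * x

Setting weightDecay to 0 (or leaving it unset) should leave such parameters untouched.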