dlmacedo opened 8 years ago
Dear Friends,
I am using the following code:
```lua
function units.agSReLU(input, weight, bias)
   -- weight and bias are multiplied by zero, so they should not affect the output
   local y = 0 * weight + 0 * bias
   local output = torch.mul(torch.abs(input) + input, 0.5)
   return output
end
```
I am calling the above function from:
```lua
local autogradFunc = autograd.nn.AutoModule('AutoGradSReLU')(units.agSReLU, initialWeight:clone(), initialBias:clone())
model:add(autogradFunc)
```
And autograd is UPDATING the weight and bias!
Could anybody please explain to me what is going on?
David
Probably due to the optimization that you are using. If you are using `weightDecay` (also known as L2 regularization) in SGD, it adds a contribution to the gradients that is a function of the weights.
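If that is the cause, setting `weightDecay = 0` should stop the updates. Here is a minimal sketch of the mechanism, assuming the stock `optim.sgd` (the parameter values and `feval` below are made up for illustration): even when the loss gradient is exactly zero, a nonzero `weightDecay` still moves the parameters.

```lua
local optim = require 'optim'

-- A loss that does not depend on x at all: dloss/dx is identically zero,
-- mirroring the agSReLU case where weight and bias get zero gradients.
local feval = function(x)
   return 0, torch.zeros(x:size())
end

-- With weightDecay > 0, optim.sgd adds weightDecay * x to the gradient
-- before the update, so the parameters move anyway.
local params = torch.Tensor{1.0, -2.0}
optim.sgd(feval, params, {learningRate = 0.1, weightDecay = 0.01})
print(params)   -- no longer {1.0, -2.0}

-- With weightDecay = 0, the zero gradient leaves the parameters untouched.
local params2 = torch.Tensor{1.0, -2.0}
optim.sgd(feval, params2, {learningRate = 0.1, weightDecay = 0})
print(params2)  -- still {1.0, -2.0}
```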