When building a custom layer, it turns out that self.gradInput can have more channels than the input.
For example:
local MyLayer, parent = torch.class('nn.MyLayer', 'nn.Module')

function MyLayer:__init()
   parent.__init(self)
end

function MyLayer:updateOutput(input)
   -- repeat the input along the channel dimension (dim 2),
   -- so the output has twice as many channels as the input
   self.output = input:repeatTensor(1, 2, 1, 1)
   return self.output
end

function MyLayer:updateGradInput(input, gradOutput)
   -- gradOutput has the size of the output, i.e. twice the channels of input
   self.gradInput = gradOutput
   return self.gradInput -- it works fine
end
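For concreteness, a quick shape check (the 1x3x4x4 input size and variable names are just illustrative assumptions, not part of the original layer):

require 'nn'

local layer = nn.MyLayer()
local input = torch.randn(1, 3, 4, 4)                   -- 3 input channels
local output = layer:forward(input)                     -- 1x6x4x4: channels doubled
local gradInput = layer:backward(input, output:clone()) -- gradOutput has the output's size
print(input:size())      -- 1x3x4x4
print(gradInput:size())  -- 1x6x4x4, larger than the input, yet no error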
However, self.gradInput has twice as many channels as input. PyTorch requires that the gradient returned by a layer match the size of its input, otherwise it raises an error. Does Torch internally crop self.gradInput down to the size of input? If so, how does it crop?
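For comparison, here is a minimal sketch of an updateGradInput that does return a gradient of the same size as input, assuming the intended backward pass for the repeat is to sum the gradients of the two copies (an assumption on my part, not part of the layer above):

function MyLayer:updateGradInput(input, gradOutput)
   local c = input:size(2)
   -- sum the gradients flowing into the two repeated copies so that
   -- gradInput has exactly the same size as input
   self.gradInput = gradOutput:narrow(2, 1, c) + gradOutput:narrow(2, c + 1, c)
   return self.gradInput
end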