Open: jvanheugten opened this issue 5 years ago
I also ran into this problem!
Me too!
Same problem, @sksq96
import math

import torch
import torch.nn as nn
from torchsummary import summary


class TestMod(nn.Module):
    def __init__(self, input_size, attention_size, eps=0.0):
        super().__init__()
        self.weight = nn.Parameter(torch.Tensor(attention_size, input_size))
        nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))

    def forward(self, x):
        return self.weight

test = TestMod(2, 3)
# print(list(test.named_parameters()))
summary(test, ((1, 2)))
yields the following output:
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
           TestMod-1                    [-1, 2]               6
================================================================
Total params: 6
Trainable params: 6
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.00
Params size (MB): 0.00
Estimated Total Size (MB): 0.00
----------------------------------------------------------------
The input size was passed in as ((1, 2)) instead of [[2]], as the package currently assumes that a channel size is passed in as well. I hope that helps! Is the output of 6 parameters what you were expecting?
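For anyone tripping over the same call convention: as far as I can tell, torchsummary expands input_size into a random dummy tensor of shape (2, *input_size), i.e. a batch of two, so input_size should exclude the batch dimension but include a channel dimension. A minimal sketch using the TestMod above (the commented-out (2,) call is hypothetical and shown only for contrast):

from torchsummary import summary

test = TestMod(2, 3)  # TestMod as defined above

# (1, 2) is expanded to a dummy batch of shape (2, 1, 2):
# a batch of two, one "channel", two features.
summary(test, (1, 2), device="cpu")  # device="cpu" if you have no GPU

# Passing (2,) would build a (2, 2) dummy batch instead; this model
# ignores x in forward(), so both calls happen to run, but the
# printed shapes differ.
# summary(test, (2,), device="cpu")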
import torch
import torch.nn as nn


class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.bias = nn.Parameter(torch.Tensor([0]))

    def forward(self, x):
        return x + self.bias

m = Model()
The model works for a (10, 200, 200) input, but I can't get the summary using torchsummary. Could you help me?
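Not a fix for torchsummary itself, but as a stopgap you can tally parameters straight from model.parameters(), which sidesteps the forward hooks entirely. A minimal sketch (count_params is just a hypothetical helper name):

import torch
import torch.nn as nn

def count_params(model: nn.Module):
    # Count parameters directly; no forward pass or hooks needed,
    # so bare nn.Parameter attributes like self.bias are included.
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return total, trainable

m = Model()  # the Model defined above
print(count_params(m))  # -> (1, 1): just the single bias scalar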
I was trying to create a custom layer and check it with summary; however, it kept crashing. Here is a simple example for which the summary code crashes:
Output: