Open · kumarneelabh13 opened 5 years ago
On running this code, I got an error in the output shape of the Dropout layer. Here are the last few lines from the output:

The result for the Dropout layer is wrong, because Dropout does not change the shape of its input, as per the PyTorch docs.
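That property is easy to verify in isolation; a minimal check (the dropout probability and tensor shape below are illustrative, not taken from the model in question):

```python
import torch
import torch.nn as nn

# Dropout zeroes elements at random but never changes the tensor shape.
drop = nn.Dropout(p=0.2)
x = torch.randn(1, 1280)
y = drop(x)
print(x.shape, y.shape)  # torch.Size([1, 1280]) torch.Size([1, 1280])
```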
This is not an error in torchsummary, but rather a lack of information: torchsummary cannot see operations that go through torch's functional API.

You pointed out that Dropout does not change the shape of the input, which is correct. But you assumed that the input to the Dropout layer is the output of ReLU6-156. Always check the `forward` implementation of the model ;)
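One quick way to do that from an interactive session is to print the source of `forward`; a sketch using the standard library's `inspect` module (assuming torchvision is installed):

```python
import inspect
from torchvision.models import mobilenet_v2

# Show which operations run inside forward but outside registered
# submodules (and therefore outside any hook-based summary tool).
model = mobilenet_v2()
print(inspect.getsource(model.forward))
```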
In this case, there is an adaptive average pooling and a `.reshape(x.shape[0], -1)` over here:
https://github.com/pytorch/vision/blob/master/torchvision/models/mobilenet.py#L155
These are not picked up by the hooking mechanism, hence your output!
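To make that concrete, here is a self-contained sketch; `TinyNet` is a made-up toy model that mirrors the MobileNetV2 pattern of functional pooling and reshaping inside `forward`, not the real architecture. Forward hooks fire only for `nn.Module` calls, so the functional ops between the two hooked layers never show up:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.classifier = nn.Linear(8, 10)

    def forward(self, x):
        x = self.conv(x)
        # Functional calls: no nn.Module instance, so no hook fires here.
        x = F.adaptive_avg_pool2d(x, 1).reshape(x.shape[0], -1)
        return self.classifier(x)

model = TinyNet()
for name, module in model.named_modules():
    if name:  # skip the unnamed root module
        module.register_forward_hook(
            lambda m, inp, out, name=name: print(name, tuple(out.shape)))

model(torch.randn(1, 3, 32, 32))
# Prints conv (1, 8, 32, 32) and classifier (1, 10); the pooling and
# reshape between them are invisible, just like in the summary above.
```

A summary built from these hooks therefore reports the input to a layer as whatever the last hooked module produced, even if other operations ran in between.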