Cadene / pretrained-models.pytorch

Pretrained ConvNets for pytorch: NASNet, ResNeXt, ResNet, InceptionV4, InceptionResnetV2, Xception, DPN, etc.
BSD 3-Clause "New" or "Revised" License

BNInception architecture #117

Open hokmund opened 5 years ago

hokmund commented 5 years ago

It seems there is a mistake in the BNInception architecture after the Oct 29th commit. I am trying to use its convolutional part as a pretrained model for transfer learning and get this error during the forward pass:

RuntimeError: given groups=1, weight of size [64, 192, 1, 1], expected input[1, 64, 8, 8] to have 192 channels, but got 64 channels instead

Cadene commented 5 years ago

The forward pass of BNInception has been tested and should work on pytorch>=0.4.

What is your version of pretrainedmodels? Consider updating: pip install --upgrade pretrainedmodels

Bonsen commented 5 years ago

@Cadene He maybe uses fastai...

jaideep11061982 commented 5 years ago

Hi, I face the same issue. Printing the model shows:

(inception_3a_1x1): Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1))
(inception_3a_1x1_bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(inception_3a_relu_1x1): ReLU(inplace)
(inception_3a_3x3_reduce): Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1))

It probably fails at this layer. Could you explain why we have in_channels=192 here while the BatchNorm layer before it outputs only 64 channels? I use a recent version of pytorch.
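For what it's worth, the 192 is not a bug in itself: in an inception block the 1x1 conv and the 3x3_reduce conv are parallel branches that both read the same 192-channel input from the previous layer; the 64-channel BN belongs to a sibling branch, not to a layer feeding the reduce conv. A minimal sketch (not the library's actual code; layer names and channel widths follow the printed inception_3a block above):

```python
import torch
import torch.nn as nn

class Inception3aSketch(nn.Module):
    """Rough sketch of BNInception's inception_3a wiring: four parallel
    branches, each fed the SAME 192-channel input, concatenated at the end."""
    def __init__(self):
        super().__init__()
        # branch 1: 1x1 conv -> 64 channels
        self.b1 = nn.Sequential(
            nn.Conv2d(192, 64, kernel_size=1),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True))
        # branch 2: 1x1 reduce (this is inception_3a_3x3_reduce) -> 3x3 conv
        self.b2 = nn.Sequential(
            nn.Conv2d(192, 64, kernel_size=1),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True))
        # branch 3: 1x1 reduce -> double 3x3 -> 96 channels
        self.b3 = nn.Sequential(
            nn.Conv2d(192, 64, kernel_size=1),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.Conv2d(64, 96, kernel_size=3, padding=1),
            nn.BatchNorm2d(96), nn.ReLU(inplace=True),
            nn.Conv2d(96, 96, kernel_size=3, padding=1),
            nn.BatchNorm2d(96), nn.ReLU(inplace=True))
        # branch 4: avg pool -> 1x1 projection -> 32 channels
        self.b4 = nn.Sequential(
            nn.AvgPool2d(3, stride=1, padding=1),
            nn.Conv2d(192, 32, kernel_size=1),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True))

    def forward(self, x):
        # every branch sees the full 192-channel input in parallel
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], 1)

x = torch.randn(1, 192, 28, 28)
out = Inception3aSketch()(x)
print(out.shape)  # channels: 64 + 64 + 96 + 32 = 256
```

So when you print the model, the modules appear one after another, but the forward pass does not chain them in that order.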

jaideep11061982 commented 5 years ago

I did upgrade the models. I use fastai; the point of failure is when it calls model.eval() with a dummy batch of shape (1, c, h, w). The evaluation itself is done with standard pytorch.

jaideep11061982 commented 5 years ago

Please ignore the layers above; here is where the error occurs:

self.inception_3a_3x3_bn = nn.BatchNorm2d(64, affine=True)
self.inception_3a_relu_3x3 = nn.ReLU(inplace)
self.inception_3a_double_3x3_reduce = nn.Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1))

The batch norm returns only 64 channels, while the reduce layer needs 192 channels as input.
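This mismatch is what you get if a branchy model's children are chained sequentially in registration order (which is roughly what fastai-style "cut the head off model.children()" code does), rather than run through the model's own forward. A hypothetical two-branch module is enough to reproduce the reported RuntimeError:

```python
import torch
import torch.nn as nn

class Branchy(nn.Module):
    """Hypothetical illustration: two convs registered in order but
    used in PARALLEL by forward(), like BNInception's inception blocks."""
    def __init__(self):
        super().__init__()
        self.conv_1x1 = nn.Conv2d(192, 64, kernel_size=1)         # branch A
        self.conv_3x3_reduce = nn.Conv2d(192, 64, kernel_size=1)  # branch B

    def forward(self, x):
        # both branches read the same 192-channel input
        return torch.cat([self.conv_1x1(x), self.conv_3x3_reduce(x)], 1)

x = torch.randn(1, 192, 8, 8)
m = Branchy()
print(m(x).shape)  # the real forward works: (1, 128, 8, 8)

# Flattening the children into nn.Sequential chains them instead:
flat = nn.Sequential(*m.children())
try:
    flat(x)  # second conv now receives 64 channels instead of 192
except RuntimeError as e:
    print(e)  # "...weight of size [64, 192, 1, 1], expected input... 192 channels, but got 64..."
```

In other words, the architecture is fine; it is the sequential re-wrapping of its layers that feeds the 64-channel output of one branch into a conv expecting the original 192-channel input.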