pytorch / vision

Datasets, Transforms and Models specific to Computer Vision
https://pytorch.org/vision

NotImplementedError: Dilation > 1 not supported in BasicBlock on Resnet #2121

WaterKnight1998 commented 4 years ago

🐛 Bug

I tried to create a deeplabv3_resnet34 variant based on the code you use to build the resnet50 and resnet101 versions:

To Reproduce

# _load_model is my local copy of the helper from
# torchvision.models.segmentation.segmentation
def deeplabv3_resnet34(pretrained=False, progress=True,
                       num_classes=21, aux_loss=None, **kwargs):
    """Constructs a DeepLabV3 model with a ResNet-34 backbone.

    Args:
        pretrained (bool): If True, returns a model pre-trained on COCO train2017 which
            contains the same classes as Pascal VOC
        progress (bool): If True, displays a progress bar of the download to stderr
    """
    return _load_model('deeplabv3', 'resnet34', pretrained, progress, num_classes, aux_loss, **kwargs)

Expected behavior

I expected the model to be created correctly. However, it throws the following error:

---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
<ipython-input-21-71ba71ca9be2> in <module>
----> 1 model=deeplabv3_resnet34(pretrained=False,num_classes=2)
      2 model.train()

~/Documents/TFG/seg/models/torchvision.py in deeplabv3_resnet34(pretrained, progress, num_classes, aux_loss, **kwargs)
     91         progress (bool): If True, displays a progress bar of the download to stderr
     92     """
---> 93     return _load_model('deeplabv3', 'resnet34', pretrained, progress, num_classes, aux_loss, **kwargs)
     94 
     95 def deeplabv3_resnet50(pretrained=False, progress=True,

~/Documents/TFG/seg/models/torchvision.py in _load_model(arch_type, backbone, pretrained, progress, num_classes, aux_loss, **kwargs)
     45     if pretrained:
     46         aux_loss = True
---> 47     model = _segm_resnet(arch_type, backbone, num_classes, aux_loss, **kwargs)
     48     if pretrained:
     49         arch = arch_type + '_' + backbone + '_coco'

~/Documents/TFG/seg/models/torchvision.py in _segm_resnet(name, backbone_name, num_classes, aux, pretrained_backbone)
     18     backbone = resnet.__dict__[backbone_name](
     19         pretrained=pretrained_backbone,
---> 20         replace_stride_with_dilation=[False, True, True])
     21 
     22     return_layers = {'layer4': 'out'}

~/anaconda3/envs/seg/lib/python3.7/site-packages/torchvision/models/resnet.py in resnet34(pretrained, progress, **kwargs)
    247     """
    248     return _resnet('resnet34', BasicBlock, [3, 4, 6, 3], pretrained, progress,
--> 249                    **kwargs)
    250 
    251 

~/anaconda3/envs/seg/lib/python3.7/site-packages/torchvision/models/resnet.py in _resnet(arch, block, layers, pretrained, progress, **kwargs)
    218 
    219 def _resnet(arch, block, layers, pretrained, progress, **kwargs):
--> 220     model = ResNet(block, layers, **kwargs)
    221     if pretrained:
    222         state_dict = load_state_dict_from_url(model_urls[arch],

~/anaconda3/envs/seg/lib/python3.7/site-packages/torchvision/models/resnet.py in __init__(self, block, layers, num_classes, zero_init_residual, groups, width_per_group, replace_stride_with_dilation, norm_layer)
    148                                        dilate=replace_stride_with_dilation[0])
    149         self.layer3 = self._make_layer(block, 256, layers[2], stride=2,
--> 150                                        dilate=replace_stride_with_dilation[1])
    151         self.layer4 = self._make_layer(block, 512, layers[3], stride=2,
    152                                        dilate=replace_stride_with_dilation[2])

~/anaconda3/envs/seg/lib/python3.7/site-packages/torchvision/models/resnet.py in _make_layer(self, block, planes, blocks, stride, dilate)
    191             layers.append(block(self.inplanes, planes, groups=self.groups,
    192                                 base_width=self.base_width, dilation=self.dilation,
--> 193                                 norm_layer=norm_layer))
    194 
    195         return nn.Sequential(*layers)

~/anaconda3/envs/seg/lib/python3.7/site-packages/torchvision/models/resnet.py in __init__(self, inplanes, planes, stride, downsample, groups, base_width, dilation, norm_layer)
     45             raise ValueError('BasicBlock only supports groups=1 and base_width=64')
     46         if dilation > 1:
---> 47             raise NotImplementedError("Dilation > 1 not supported in BasicBlock")
     48         # Both self.conv1 and self.downsample layers downsample the input when stride != 1
     49         self.conv1 = conv3x3(inplanes, planes, stride)

NotImplementedError: Dilation > 1 not supported in BasicBlock
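
The traceback shows the root cause: _segm_resnet always builds the backbone with replace_stride_with_dilation=[False, True, True], but resnet34 is composed of BasicBlock rather than the Bottleneck used by resnet50/101, and BasicBlock.__init__ rejects any dilation > 1. The error can therefore be reproduced without the DeepLabV3 wrapper at all; a minimal sketch of the same backbone call:

from torchvision.models import resnet

# The same call _segm_resnet makes internally; this raises
# "NotImplementedError: Dilation > 1 not supported in BasicBlock"
# because every residual block in resnet34 is a BasicBlock.
backbone = resnet.resnet34(pretrained=False,
                           replace_stride_with_dilation=[False, True, True])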

cc @vfdev-5

pmeier commented 4 years ago

Related: #2115

WaterKnight1998 commented 4 years ago

Related: #2115

How can I get torchvision updated with that addition?

pmeier commented 4 years ago

That depends on how you installed torchvision in the first place:

If you installed a pre-built binary with pip or conda, you can't get this fix right now. Once #2115 is merged, you will be able to install a nightly build that contains it.

If you installed from source, you can apply the fix yourself locally and rebuild. You can also clone the fork / branch the author used for the PR and build from that. If you do, keep in mind that you will only get updates if the author keeps his fork up to date.
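
In the meantime, you can also sidestep the dilation entirely by keeping the backbone's default strides, accepting output stride 32 instead of the usual 8 (the head still upsamples the logits back to the input resolution, just from a coarser feature map). A minimal sketch, assuming torchvision's internal helpers IntermediateLayerGetter, DeepLabHead, and DeepLabV3 (private APIs that may change between releases); the function name deeplabv3_resnet34_os32 is made up for the example:

from torchvision.models import resnet
from torchvision.models._utils import IntermediateLayerGetter
from torchvision.models.segmentation.deeplabv3 import DeepLabHead, DeepLabV3

def deeplabv3_resnet34_os32(num_classes=21, pretrained_backbone=True):
    # Keep the default strides: BasicBlock only rejects dilation, so a
    # plain ResNet-34 backbone builds fine at output stride 32.
    backbone = resnet.resnet34(pretrained=pretrained_backbone)
    backbone = IntermediateLayerGetter(backbone, return_layers={'layer4': 'out'})
    # layer4 of ResNet-34 outputs 512 channels (BasicBlock.expansion == 1),
    # not the 2048 of ResNet-50/101 (Bottleneck.expansion == 4).
    classifier = DeepLabHead(512, num_classes)
    return DeepLabV3(backbone, classifier, aux_classifier=None)

model = deeplabv3_resnet34_os32(num_classes=2)
model.train()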

vincentqb commented 4 years ago

It looks like #2115 would fix this, is that correct?

WaterKnight1998 commented 4 years ago

It looks like #2115 would fix this, is that correct?

I tried installing from his pull request and got the exact same error!

pmeier commented 4 years ago

@vincentqb

It looks like #2115 would fix this, is that correct?

In the current state probably not. As @fmassa has stated in https://github.com/pytorch/vision/pull/2115#issuecomment-617193790 this will require more work.
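
For context on what a fix involves: BasicBlock's two 3x3 convolutions would need to be built with the requested dilation, and torchvision's conv3x3 helper already accepts a dilation argument (setting padding = dilation so spatial size is preserved). A rough sketch of the idea only, with DilatedBasicBlock as a hypothetical name; as noted above, the actual patch requires more than this:

from torchvision.models.resnet import conv3x3, BasicBlock

class DilatedBasicBlock(BasicBlock):
    # Sketch only, not the patch from #2115: build the parent with
    # dilation=1 to get past its guard, then swap in dilated 3x3 convs.
    def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
                 base_width=64, dilation=1, norm_layer=None):
        super().__init__(inplanes, planes, stride, downsample, groups,
                         base_width, 1, norm_layer)
        self.conv1 = conv3x3(inplanes, planes, stride, dilation=dilation)
        self.conv2 = conv3x3(planes, planes, dilation=dilation)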

vincentqb commented 4 years ago

I see, thanks for pointing out the comment :)