CSAILVision / semantic-segmentation-pytorch

PyTorch implementation for Semantic Segmentation/Scene Parsing on the MIT ADE20K dataset
http://sceneparsing.csail.mit.edu/
BSD 3-Clause "New" or "Revised" License

Runtime error: size mismatch #257

Open yamiefun opened 3 years ago

yamiefun commented 3 years ago

Hi, I'm running the Google Colab provided in the README, and I want to substitute HRNetv2 for the default model. I downloaded the weights files for the encoder and decoder from the link provided in the README and modified the `net_encoder` and `net_decoder` weights in the Colab, but I encountered an error. Here is the error message. Do I need to modify anything else?

```
Loading weights for net_encoder
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>()
      4     fc_dim=2048,
      5     #weights='ckpt/ade20k-resnet50dilated-ppm_deepsup/encoder_epoch_20.pth')
----> 6     weights='drive/MyDrive/corner/encoder_epoch_30.pth')
      7 net_decoder = ModelBuilder.build_decoder(
      8     arch='ppm_deepsup',

1 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
   1222         if len(error_msgs) > 0:
   1223             raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
-> 1224                 self.__class__.__name__, "\n\t".join(error_msgs)))
   1225         return _IncompatibleKeys(missing_keys, unexpected_keys)
   1226

RuntimeError: Error(s) in loading state_dict for ResnetDilated:
    size mismatch for layer1.0.conv1.weight: copying a param with shape torch.Size([64, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 128, 1, 1]).
    size mismatch for layer1.0.downsample.0.weight: copying a param with shape torch.Size([256, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 128, 1, 1]).
```
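Note the model class in the error: the weights are being loaded into `ResnetDilated`, which means the Colab is still building the default `resnet50dilated` encoder (and the `ppm_deepsup` decoder) while the checkpoint on disk is HRNetv2 — swapping only the `weights` path is not enough. A minimal sketch of what the builder calls would likely need to look like instead, assuming the arch names and feature width from the repo's `ade20k-hrnetv2-c1` configuration (the decoder weights path here is hypothetical, mirroring the encoder path from the question):

```python
# Sketch: build the HRNetv2 encoder with its matching C1 decoder,
# instead of the Colab's default resnet50dilated + ppm_deepsup pair.
from mit_semseg.models import ModelBuilder

net_encoder = ModelBuilder.build_encoder(
    arch='hrnetv2',      # was 'resnet50dilated' in the Colab
    fc_dim=720,          # HRNetv2 feature width in this repo, not 2048
    weights='drive/MyDrive/corner/encoder_epoch_30.pth')

net_decoder = ModelBuilder.build_decoder(
    arch='c1',           # HRNetv2 checkpoints pair with the C1 decoder,
                         # not 'ppm_deepsup'
    fc_dim=720,
    num_class=150,
    use_softmax=True,
    weights='drive/MyDrive/corner/decoder_epoch_30.pth')  # assumed path
```

With the encoder arch left as `resnet50dilated`, `load_state_dict` compares the HRNetv2 checkpoint tensors against the ResNet module's parameter shapes and fails with exactly the size mismatches shown above.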