I trained the model with vgg16 and loaded it for detecting images, but when I executed demo.py an error arose:
RuntimeError: Error(s) in loading state_dict for vgg16:
size mismatch for RCNN_rpn.RPN_cls_score.weight: copying a param with shape torch.Size([24, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([18, 512, 1, 1]).
size mismatch for RCNN_rpn.RPN_cls_score.bias: copying a param with shape torch.Size([24]) from checkpoint, the shape in current model is torch.Size([18]).
size mismatch for RCNN_rpn.RPN_bbox_pred.weight: copying a param with shape torch.Size([48, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([36, 512, 1, 1]).
size mismatch for RCNN_rpn.RPN_bbox_pred.bias: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([36]).
size mismatch for RCNN_cls_score.weight: copying a param with shape torch.Size([3, 4096]) from checkpoint, the shape in current model is torch.Size([21, 4096]).
size mismatch for RCNN_cls_score.bias: copying a param with shape torch.Size([3]) from checkpoint, the shape in current model is torch.Size([21]).
size mismatch for RCNN_bbox_pred.weight: copying a param with shape torch.Size([12, 4096]) from checkpoint, the shape in current model is torch.Size([84, 4096]).
size mismatch for RCNN_bbox_pred.bias: copying a param with shape torch.Size([12]) from checkpoint, the shape in current model is torch.Size([84]).
The command was: python demo.py --dataset coco --cfg cfgs/vgg16.yml --net vgg16 --checksession 1 --checkepoch 1 --checkpoint 43594 --cuda --load_dir outputs/vgg16
I would really appreciate it if someone could help me.
I figured out how to fix it: modify lib/model/utils/config.py line 295 to __C.ANCHOR_SCALES = [4, 8, 16, 32], and then demo.py runs successfully.
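For anyone hitting the same error, here is a minimal sketch of where those shapes come from, assuming the usual Faster R-CNN head layout and the repo's default ANCHOR_RATIOS = [0.5, 1, 2] (the class counts 3 and 21 are taken from the error message itself). The RPN heads output 2 and 4 values per anchor, so their channel counts depend on ANCHOR_SCALES, which is why the checkpoint (trained with 4 scales) does not match a model rebuilt with the default 3 scales:

```python
# Hypothetical shape arithmetic behind the size-mismatch errors (not code from the repo).

def rpn_head_out_channels(anchor_scales, anchor_ratios):
    num_anchors = len(anchor_scales) * len(anchor_ratios)
    # cls head: 2 scores (object / not object) per anchor
    # bbox head: 4 regression targets (dx, dy, dw, dh) per anchor
    return 2 * num_anchors, 4 * num_anchors

# Checkpoint side: trained with ANCHOR_SCALES = [4, 8, 16, 32]
print(rpn_head_out_channels([4, 8, 16, 32], [0.5, 1, 2]))  # (24, 48)

# Demo side: default ANCHOR_SCALES = [8, 16, 32]
print(rpn_head_out_channels([8, 16, 32], [0.5, 1, 2]))     # (18, 36)

# The R-CNN heads scale with the number of classes instead:
# RCNN_cls_score weight is [num_classes, 4096], RCNN_bbox_pred is [4 * num_classes, 4096]
print(3 * 4, 21 * 4)  # 12 vs 84, matching the last two mismatches
```

The RCNN_cls_score / RCNN_bbox_pred mismatches (3 classes in the checkpoint vs 21 in the rebuilt model) depend on the class list the demo builds the network with, not on ANCHOR_SCALES, so they may need the demo's class list to match the one used for training as well.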