Chenlulu1993 opened this issue 3 years ago
Hi, when I run `python demo.py demo.jpg ./configs/retinanet_pvt_v2_b5_fpn_1x_coco.py ./checkpoints/retinanet_pvt_v2_b5_fpn_1x_coco.pth`, I get this error: `KeyError: 'RetinaNet: "pvt_v2_b5: \'pretrained\'"'`. How can I solve this problem?
I also have the same problem. Have you solved this yet?
I had the same issue, but fixed it by commenting out or removing the `pretrained` argument.
For example, in `pvt_v2.py`:

```python
class pvt_v2_b0(PyramidVisionTransformerV2):
    def __init__(self, **kwargs):
        super(pvt_v2_b0, self).__init__(
            patch_size=4, embed_dims=[32, 64, 160, 256], num_heads=[1, 2, 5, 8],
            mlp_ratios=[8, 8, 4, 4], qkv_bias=True,
            norm_layer=partial(nn.LayerNorm, eps=1e-6),
            depths=[2, 2, 2, 2], sr_ratios=[8, 4, 2, 1],
            drop_rate=0.0, drop_path_rate=0.1,
            pretrained=kwargs['pretrained'])
```

replace it with:

```python
class pvt_v2_b0(PyramidVisionTransformerV2):
    def __init__(self, **kwargs):
        super(pvt_v2_b0, self).__init__(
            patch_size=4, embed_dims=[32, 64, 160, 256], num_heads=[1, 2, 5, 8],
            mlp_ratios=[8, 8, 4, 4], qkv_bias=True,
            norm_layer=partial(nn.LayerNorm, eps=1e-6),
            depths=[2, 2, 2, 2], sr_ratios=[8, 4, 2, 1],
            drop_rate=0.0, drop_path_rate=0.1)
```
You need to make the same modification for the other PVT models. Note: this modification is only for evaluation. To train a model, you need to put the `pretrained` argument back; otherwise, training will start from scratch.
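A variant that avoids toggling the line between evaluation and training: read `'pretrained'` with `kwargs.get` so the key is optional. This is a minimal sketch with illustrative stand-in classes (`Base` / `Sub`), not the real `PyramidVisionTransformerV2` / `pvt_v2_b0`; whether it works as-is depends on the base class accepting `pretrained=None`.

```python
# Stand-in classes to illustrate the pattern; the real code subclasses
# PyramidVisionTransformerV2 instead of Base.
class Base:
    def __init__(self, depths=None, pretrained=None):
        self.depths = depths
        self.pretrained = pretrained

class Sub(Base):
    def __init__(self, **kwargs):
        # kwargs.get avoids the KeyError when the config omits 'pretrained'
        super().__init__(depths=[2, 2, 2, 2],
                         pretrained=kwargs.get('pretrained', None))

m_eval = Sub()                        # evaluation: no 'pretrained' key, no KeyError
m_train = Sub(pretrained='ckpt.pth')  # training: checkpoint path still forwarded
```

With this shape, the same class definition serves both the demo script (which passes no `pretrained`) and training configs (which do).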
I removed this statement so that I could run inference. However, after I put it back for training, I still hit the same error. Can you help me?
I am facing the same issue in training Deformable DETR with PVT, have you solved this?