facebookresearch / detectron2

Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
https://detectron2.readthedocs.io/en/latest/
Apache License 2.0

Loading checkpoint for ViTDet raises shape mismatch warning #4641

Open layadas opened 1 year ago

layadas commented 1 year ago

Instructions To Reproduce the Issue:

I installed Detectron2 and attempted to train the ViTDet base model following the documentation here: https://github.com/facebookresearch/detectron2/tree/main/projects/ViTDet

I have made no changes to the configs or any other files.

The exact command I run is (copy-pasted from the page linked above): `../../tools/lazyconfig_train_net.py --config-file configs/path/to/config.py`

Everything runs smoothly and the model starts to train, but many warning messages are generated, pointing to mismatched shapes for norm.bias and norm.weight between the checkpoint and the model. The relevant part of the output log is:

```
[11/04 11:35:50 fvcore.common.checkpoint]: [Checkpointer] Loading from detectron2://ImageNetPretrained/MAE/mae_pretrain_vit_base.pth ...
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.bias in checkpoint is torch.Size([768]), while shape of backbone.simfp_2.4.norm.bias in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.bias will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.weight in checkpoint is torch.Size([768]), while shape of backbone.simfp_2.4.norm.weight in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.weight will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.bias in checkpoint is torch.Size([768]), while shape of backbone.simfp_2.5.norm.bias in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.bias will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.weight in checkpoint is torch.Size([768]), while shape of backbone.simfp_2.5.norm.weight in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.weight will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.bias in checkpoint is torch.Size([768]), while shape of backbone.simfp_3.1.norm.bias in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.bias will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.weight in checkpoint is torch.Size([768]), while shape of backbone.simfp_3.1.norm.weight in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.weight will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.bias in checkpoint is torch.Size([768]), while shape of backbone.simfp_3.2.norm.bias in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.bias will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.weight in checkpoint is torch.Size([768]), while shape of backbone.simfp_3.2.norm.weight in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.weight will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.bias in checkpoint is torch.Size([768]), while shape of backbone.simfp_4.0.norm.bias in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.bias will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.weight in checkpoint is torch.Size([768]), while shape of backbone.simfp_4.0.norm.weight in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.weight will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.bias in checkpoint is torch.Size([768]), while shape of backbone.simfp_4.1.norm.bias in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.bias will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.weight in checkpoint is torch.Size([768]), while shape of backbone.simfp_4.1.norm.weight in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.weight will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.bias in checkpoint is torch.Size([768]), while shape of backbone.simfp_5.1.norm.bias in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.bias will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.weight in checkpoint is torch.Size([768]), while shape of backbone.simfp_5.1.norm.weight in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.weight will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.bias in checkpoint is torch.Size([768]), while shape of backbone.simfp_5.2.norm.bias in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.bias will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.weight in checkpoint is torch.Size([768]), while shape of backbone.simfp_5.2.norm.weight in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.weight will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.bias in checkpoint is torch.Size([768]), while shape of roi_heads.box_head.conv1.norm.bias in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.bias will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.weight in checkpoint is torch.Size([768]), while shape of roi_heads.box_head.conv1.norm.weight in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.weight will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.bias in checkpoint is torch.Size([768]), while shape of roi_heads.box_head.conv2.norm.bias in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.bias will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.weight in checkpoint is torch.Size([768]), while shape of roi_heads.box_head.conv2.norm.weight in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.weight will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.bias in checkpoint is torch.Size([768]), while shape of roi_heads.box_head.conv3.norm.bias in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.bias will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.weight in checkpoint is torch.Size([768]), while shape of roi_heads.box_head.conv3.norm.weight in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.weight will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.bias in checkpoint is torch.Size([768]), while shape of roi_heads.box_head.conv4.norm.bias in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.bias will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.weight in checkpoint is torch.Size([768]), while shape of roi_heads.box_head.conv4.norm.weight in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.weight will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.bias in checkpoint is torch.Size([768]), while shape of roi_heads.mask_head.mask_fcn1.norm.bias in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.bias will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.weight in checkpoint is torch.Size([768]), while shape of roi_heads.mask_head.mask_fcn1.norm.weight in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.weight will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.bias in checkpoint is torch.Size([768]), while shape of roi_heads.mask_head.mask_fcn2.norm.bias in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.bias will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.weight in checkpoint is torch.Size([768]), while shape of roi_heads.mask_head.mask_fcn2.norm.weight in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.weight will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.bias in checkpoint is torch.Size([768]), while shape of roi_heads.mask_head.mask_fcn3.norm.bias in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.bias will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.weight in checkpoint is torch.Size([768]), while shape of roi_heads.mask_head.mask_fcn3.norm.weight in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.weight will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.bias in checkpoint is torch.Size([768]), while shape of roi_heads.mask_head.mask_fcn4.norm.bias in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.bias will not be loaded. Please double check and see if this is desired.
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: Shape of norm.weight in checkpoint is torch.Size([768]), while shape of roi_heads.mask_head.mask_fcn4.norm.weight in model is torch.Size([256]).
WARNING [11/04 11:35:50 d2.checkpoint.c2_model_loading]: norm.weight will not be loaded. Please double check and see if this is desired.
[11/04 11:35:50 d2.checkpoint.c2_model_loading]: Following weights matched with submodule backbone.net:
| Names in Model        | Names in Checkpoint               | Shapes               |
|:----------------------|:----------------------------------|:---------------------|
| blocks.0.attn.proj.*  | blocks.0.attn.proj.{bias,weight}  | (768,) (768,768)     |
| blocks.0.attn.qkv.*   | blocks.0.attn.qkv.{bias,weight}   | (2304,) (2304,768)   |
| blocks.0.mlp.fc1.*    | blocks.0.mlp.fc1.{bias,weight}    | (3072,) (3072,768)   |
| blocks.0.mlp.fc2.*    | blocks.0.mlp.fc2.{bias,weight}    | (768,) (768,3072)    |
| blocks.0.norm1.*      | blocks.0.norm1.{bias,weight}      | (768,) (768,)        |
| blocks.0.norm2.*      | blocks.0.norm2.{bias,weight}      | (768,) (768,)        |
| blocks.1.attn.proj.*  | blocks.1.attn.proj.{bias,weight}  | (768,) (768,768)     |
| blocks.1.attn.qkv.*   | blocks.1.attn.qkv.{bias,weight}   | (2304,) (2304,768)   |
| blocks.1.mlp.fc1.*    | blocks.1.mlp.fc1.{bias,weight}    | (3072,) (3072,768)   |
| blocks.1.mlp.fc2.*    | blocks.1.mlp.fc2.{bias,weight}    | (768,) (768,3072)    |
| blocks.1.norm1.*      | blocks.1.norm1.{bias,weight}      | (768,) (768,)        |
| blocks.1.norm2.*      | blocks.1.norm2.{bias,weight}      | (768,) (768,)        |
| blocks.10.attn.proj.* | blocks.10.attn.proj.{bias,weight} | (768,) (768,768)     |
| blocks.10.attn.qkv.*  | blocks.10.attn.qkv.{bias,weight}  | (2304,) (2304,768)   |
| blocks.10.mlp.fc1.*   | blocks.10.mlp.fc1.{bias,weight}   | (3072,) (3072,768)   |
| blocks.10.mlp.fc2.*   | blocks.10.mlp.fc2.{bias,weight}   | (768,) (768,3072)    |
| blocks.10.norm1.*     | blocks.10.norm1.{bias,weight}     | (768,) (768,)        |
| blocks.10.norm2.*     | blocks.10.norm2.{bias,weight}     | (768,) (768,)        |
| blocks.11.attn.proj.* | blocks.11.attn.proj.{bias,weight} | (768,) (768,768)     |
| blocks.11.attn.qkv.*  | blocks.11.attn.qkv.{bias,weight}  | (2304,) (2304,768)   |
| blocks.11.mlp.fc1.*   | blocks.11.mlp.fc1.{bias,weight}   | (3072,) (3072,768)   |
| blocks.11.mlp.fc2.*   | blocks.11.mlp.fc2.{bias,weight}   | (768,) (768,3072)    |
| blocks.11.norm1.*     | blocks.11.norm1.{bias,weight}     | (768,) (768,)        |
| blocks.11.norm2.*     | blocks.11.norm2.{bias,weight}     | (768,) (768,)        |
| blocks.2.attn.proj.*  | blocks.2.attn.proj.{bias,weight}  | (768,) (768,768)     |
| blocks.2.attn.qkv.*   | blocks.2.attn.qkv.{bias,weight}   | (2304,) (2304,768)   |
| blocks.2.mlp.fc1.*    | blocks.2.mlp.fc1.{bias,weight}    | (3072,) (3072,768)   |
| blocks.2.mlp.fc2.*    | blocks.2.mlp.fc2.{bias,weight}    | (768,) (768,3072)    |
| blocks.2.norm1.*      | blocks.2.norm1.{bias,weight}      | (768,) (768,)        |
| blocks.2.norm2.*      | blocks.2.norm2.{bias,weight}      | (768,) (768,)        |
| blocks.3.attn.proj.*  | blocks.3.attn.proj.{bias,weight}  | (768,) (768,768)     |
| blocks.3.attn.qkv.*   | blocks.3.attn.qkv.{bias,weight}   | (2304,) (2304,768)   |
| blocks.3.mlp.fc1.*    | blocks.3.mlp.fc1.{bias,weight}    | (3072,) (3072,768)   |
| blocks.3.mlp.fc2.*    | blocks.3.mlp.fc2.{bias,weight}    | (768,) (768,3072)    |
| blocks.3.norm1.*      | blocks.3.norm1.{bias,weight}      | (768,) (768,)        |
| blocks.3.norm2.*      | blocks.3.norm2.{bias,weight}      | (768,) (768,)        |
| blocks.4.attn.proj.*  | blocks.4.attn.proj.{bias,weight}  | (768,) (768,768)     |
| blocks.4.attn.qkv.*   | blocks.4.attn.qkv.{bias,weight}   | (2304,) (2304,768)   |
| blocks.4.mlp.fc1.*    | blocks.4.mlp.fc1.{bias,weight}    | (3072,) (3072,768)   |
| blocks.4.mlp.fc2.*    | blocks.4.mlp.fc2.{bias,weight}    | (768,) (768,3072)    |
| blocks.4.norm1.*      | blocks.4.norm1.{bias,weight}      | (768,) (768,)        |
| blocks.4.norm2.*      | blocks.4.norm2.{bias,weight}      | (768,) (768,)        |
| blocks.5.attn.proj.*  | blocks.5.attn.proj.{bias,weight}  | (768,) (768,768)     |
| blocks.5.attn.qkv.*   | blocks.5.attn.qkv.{bias,weight}   | (2304,) (2304,768)   |
| blocks.5.mlp.fc1.*    | blocks.5.mlp.fc1.{bias,weight}    | (3072,) (3072,768)   |
| blocks.5.mlp.fc2.*    | blocks.5.mlp.fc2.{bias,weight}    | (768,) (768,3072)    |
| blocks.5.norm1.*      | blocks.5.norm1.{bias,weight}      | (768,) (768,)        |
| blocks.5.norm2.*      | blocks.5.norm2.{bias,weight}      | (768,) (768,)        |
| blocks.6.attn.proj.*  | blocks.6.attn.proj.{bias,weight}  | (768,) (768,768)     |
| blocks.6.attn.qkv.*   | blocks.6.attn.qkv.{bias,weight}   | (2304,) (2304,768)   |
| blocks.6.mlp.fc1.*    | blocks.6.mlp.fc1.{bias,weight}    | (3072,) (3072,768)   |
| blocks.6.mlp.fc2.*    | blocks.6.mlp.fc2.{bias,weight}    | (768,) (768,3072)    |
| blocks.6.norm1.*      | blocks.6.norm1.{bias,weight}      | (768,) (768,)        |
| blocks.6.norm2.*      | blocks.6.norm2.{bias,weight}      | (768,) (768,)        |
| blocks.7.attn.proj.*  | blocks.7.attn.proj.{bias,weight}  | (768,) (768,768)     |
| blocks.7.attn.qkv.*   | blocks.7.attn.qkv.{bias,weight}   | (2304,) (2304,768)   |
| blocks.7.mlp.fc1.*    | blocks.7.mlp.fc1.{bias,weight}    | (3072,) (3072,768)   |
| blocks.7.mlp.fc2.*    | blocks.7.mlp.fc2.{bias,weight}    | (768,) (768,3072)    |
| blocks.7.norm1.*      | blocks.7.norm1.{bias,weight}      | (768,) (768,)        |
| blocks.7.norm2.*      | blocks.7.norm2.{bias,weight}      | (768,) (768,)        |
| blocks.8.attn.proj.*  | blocks.8.attn.proj.{bias,weight}  | (768,) (768,768)     |
| blocks.8.attn.qkv.*   | blocks.8.attn.qkv.{bias,weight}   | (2304,) (2304,768)   |
| blocks.8.mlp.fc1.*    | blocks.8.mlp.fc1.{bias,weight}    | (3072,) (3072,768)   |
| blocks.8.mlp.fc2.*    | blocks.8.mlp.fc2.{bias,weight}    | (768,) (768,3072)    |
| blocks.8.norm1.*      | blocks.8.norm1.{bias,weight}      | (768,) (768,)        |
| blocks.8.norm2.*      | blocks.8.norm2.{bias,weight}      | (768,) (768,)        |
| blocks.9.attn.proj.*  | blocks.9.attn.proj.{bias,weight}  | (768,) (768,768)     |
| blocks.9.attn.qkv.*   | blocks.9.attn.qkv.{bias,weight}   | (2304,) (2304,768)   |
| blocks.9.mlp.fc1.*    | blocks.9.mlp.fc1.{bias,weight}    | (3072,) (3072,768)   |
| blocks.9.mlp.fc2.*    | blocks.9.mlp.fc2.{bias,weight}    | (768,) (768,3072)    |
| blocks.9.norm1.*      | blocks.9.norm1.{bias,weight}      | (768,) (768,)        |
| blocks.9.norm2.*      | blocks.9.norm2.{bias,weight}      | (768,) (768,)        |
| patch_embed.proj.*    | patch_embed.proj.{bias,weight}    | (768,) (768,3,16,16) |
| pos_embed             | pos_embed                         | (1, 197, 768)        |
WARNING [11/04 11:35:50 fvcore.common.checkpoint]: Some model parameters or buffers are not found in the checkpoint:
backbone.net.blocks.0.attn.{rel_pos_h, rel_pos_w}
backbone.net.blocks.1.attn.{rel_pos_h, rel_pos_w}
backbone.net.blocks.10.attn.{rel_pos_h, rel_pos_w}
backbone.net.blocks.11.attn.{rel_pos_h, rel_pos_w}
backbone.net.blocks.2.attn.{rel_pos_h, rel_pos_w}
backbone.net.blocks.3.attn.{rel_pos_h, rel_pos_w}
backbone.net.blocks.4.attn.{rel_pos_h, rel_pos_w}
backbone.net.blocks.5.attn.{rel_pos_h, rel_pos_w}
backbone.net.blocks.6.attn.{rel_pos_h, rel_pos_w}
backbone.net.blocks.7.attn.{rel_pos_h, rel_pos_w}
backbone.net.blocks.8.attn.{rel_pos_h, rel_pos_w}
backbone.net.blocks.9.attn.{rel_pos_h, rel_pos_w}
backbone.simfp_2.0.{bias, weight}
backbone.simfp_2.1.{bias, weight}
backbone.simfp_2.3.{bias, weight}
backbone.simfp_2.4.norm.{bias, weight}
backbone.simfp_2.4.weight
backbone.simfp_2.5.norm.{bias, weight}
backbone.simfp_2.5.weight
backbone.simfp_3.0.{bias, weight}
backbone.simfp_3.1.norm.{bias, weight}
backbone.simfp_3.1.weight
backbone.simfp_3.2.norm.{bias, weight}
backbone.simfp_3.2.weight
backbone.simfp_4.0.norm.{bias, weight}
backbone.simfp_4.0.weight
backbone.simfp_4.1.norm.{bias, weight}
backbone.simfp_4.1.weight
backbone.simfp_5.1.norm.{bias, weight}
backbone.simfp_5.1.weight
backbone.simfp_5.2.norm.{bias, weight}
backbone.simfp_5.2.weight
proposal_generator.rpn_head.anchor_deltas.{bias, weight}
proposal_generator.rpn_head.conv.conv0.{bias, weight}
proposal_generator.rpn_head.conv.conv1.{bias, weight}
proposal_generator.rpn_head.objectness_logits.{bias, weight}
roi_heads.box_head.conv1.norm.{bias, weight}
roi_heads.box_head.conv1.weight
roi_heads.box_head.conv2.norm.{bias, weight}
roi_heads.box_head.conv2.weight
roi_heads.box_head.conv3.norm.{bias, weight}
roi_heads.box_head.conv3.weight
roi_heads.box_head.conv4.norm.{bias, weight}
roi_heads.box_head.conv4.weight
roi_heads.box_head.fc1.{bias, weight}
roi_heads.box_predictor.bbox_pred.{bias, weight}
roi_heads.box_predictor.cls_score.{bias, weight}
roi_heads.mask_head.deconv.{bias, weight}
roi_heads.mask_head.mask_fcn1.norm.{bias, weight}
roi_heads.mask_head.mask_fcn1.weight
roi_heads.mask_head.mask_fcn2.norm.{bias, weight}
roi_heads.mask_head.mask_fcn2.weight
roi_heads.mask_head.mask_fcn3.norm.{bias, weight}
roi_heads.mask_head.mask_fcn3.weight
roi_heads.mask_head.mask_fcn4.norm.{bias, weight}
roi_heads.mask_head.mask_fcn4.weight
roi_heads.mask_head.predictor.{bias, weight}
WARNING [11/04 11:35:50 fvcore.common.checkpoint]: The checkpoint state_dict contains keys that are not used by the model:
  cls_token
  norm.{bias, weight}
[11/04 11:35:50 d2.engine.train_loop]: Starting training from iteration 0
```

It goes on to train the model, but I stop it because I am not sure whether the model architecture is correct.
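
For reference, the stray norm.* keys can be traced by inspecting the checkpoint directly. Below is a minimal sketch; it assumes the MAE file has been downloaded locally (the filename is illustrative) and that, as in the official MAE releases, the weights are nested under a "model" key:

```python
import torch

# Load the MAE pre-trained ViT-B checkpoint on CPU.
# "mae_pretrain_vit_base.pth" is an illustrative local path.
ckpt = torch.load("mae_pretrain_vit_base.pth", map_location="cpu")

# Official MAE releases nest the state_dict under "model"; fall back to the
# top level if this particular file does not.
state_dict = ckpt.get("model", ckpt)

# The checkpoint's top-level norm.{weight,bias} is the ViT encoder's final
# 768-dim LayerNorm (and cls_token its class token), neither of which exists
# in the ViTDet backbone -- hence the unmatched-key warnings above.
for key in ("norm.weight", "norm.bias", "cls_token"):
    if key in state_dict:
        print(key, tuple(state_dict[key].shape))
```

If those are the only leftover keys, the warnings amount to the loader reporting that an encoder-only checkpoint does not cover the detector-specific modules.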

Expected behavior:

Since everything is untouched after installation, shape mismatches should not occur, and it is not clear to me why this is happening. If there is a way to change the model settings so that these warnings do not occur, that would be of great help. Thank you.
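
If it helps to double-check the instantiated architecture against the config, the model can be built directly with detectron2's LazyConfig API. A sketch under assumptions (the stock ViTDet-B COCO config, run from the detectron2 repository root):

```python
# Assumed: detectron2's LazyConfig API and the stock ViTDet-B COCO config,
# executed from the detectron2 repository root.
from detectron2.config import LazyConfig, instantiate

cfg = LazyConfig.load("projects/ViTDet/configs/COCO/mask_rcnn_vitdet_b_100ep.py")
model = instantiate(cfg.model)

# Print norm-parameter shapes: the ViT encoder (backbone.net) should report
# (768,), while the detector neck/heads report (256,), matching the warnings.
for name, param in model.named_parameters():
    if ".norm" in name and name.endswith("weight"):
        print(name, tuple(param.shape))
```

Seeing (768,) only under backbone.net and (256,) elsewhere would indicate that the architecture itself is as configured, independent of what the checkpoint provides.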

Environment:

Output of `python -m detectron2.utils.collect_env`:

```
----------------------  -------------------------------------------------------------------------
sys.platform            win32
Python                  3.8.13 (default, Oct 19 2022, 22:38:03) [MSC v.1916 64 bit (AMD64)]
numpy                   1.23.3
detectron2              0.6 @c:\ssl_imbalance\testproject2\detectron2\detectron2
Compiler                MSVC 192930141
CUDA compiler           not available
DETECTRON2_ENV_MODULE   <not set>
PyTorch                 1.12.1 @C:\APPS\Anaconda3\envs\TestProject2\lib\site-packages\torch
PyTorch debug build     False
GPU available           Yes
GPU 0,1,2               NVIDIA RTX A6000 (arch=8.6)
Driver version
CUDA_HOME               None - invalid!
Pillow                  9.2.0
torchvision             0.13.1 @C:\APPS\Anaconda3\envs\TestProject2\lib\site-packages\torchvision
torchvision arch flags  C:\APPS\Anaconda3\envs\TestProject2\lib\site-packages\torchvision\_C.pyd
fvcore                  0.1.5.post20220512
iopath                  0.1.9
cv2                     Not found
----------------------  -------------------------------------------------------------------------
PyTorch built with:
  - C++ Version: 199711
  - MSVC 192829337
  - Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)
  - OpenMP 2019
  - LAPACK is enabled (usually provided by MKL)
  - CPU capability usage: AVX2
  - CUDA Runtime 11.3
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
  - CuDNN 8.3.2  (built against CUDA 11.5)
  - Magma 2.5.4
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.3.2, CXX_COMPILER=C:/cb/pytorch_1000000000000/work/tmp_bin/sccache-cl.exe, CXX_FLAGS=/DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/cb/pytorch_1000000000000/work/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.12.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=OFF, USE_OPENMP=ON, USE_ROCM=OFF,
```
FedericoVasile1 commented 1 year ago

Hi, any news on this?

gugibugy commented 1 month ago

+1, I have the same question!