xyzlancehe / DentexSegAndDet


pretrained weights issue while running train_diffdet.py #2

Open wathoresanket opened 11 months ago

wathoresanket commented 11 months ago

Hi,

Can you explain more about the pretrained weight files and what changes we need to make accordingly? I could only find the weights for the Swin Transformer and wasn't able to find the others. Even after putting the files in a checkpoints folder, I get errors while running train_diffdet.py. Please help.

Thanks

@xyzlancehe please help

AlexanderPeter commented 11 months ago


Can you elaborate on the errors? Without a proper error message it is impossible to help.

vgthengane commented 11 months ago


You can download other weights from here - https://drive.google.com/drive/folders/1qD5m1NmK0kjE5hh-G17XUX751WsEG-h_
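Before launching training, it may help to confirm the downloaded files actually sit where the scripts expect them. Below is a minimal sketch; the two filenames are taken from checkpoint paths quoted elsewhere in this thread and are only examples, so adjust the list to whatever your configs reference:

```python
from pathlib import Path

# Example filenames mentioned in this thread; replace with the
# checkpoint names your own configs actually point at.
EXPECTED = [
    "dino_pretrained_checkpoint0033_4scale.pth",
    "dino_pretrained_checkpoint0029_4scale_swin.pth",
]

def missing_checkpoints(checkpoint_dir="checkpoints", expected=EXPECTED):
    """Return the expected checkpoint files not present under checkpoint_dir."""
    root = Path(checkpoint_dir)
    return [name for name in expected if not (root / name).is_file()]
```

Running this before training makes "file not found" failures visible up front instead of mid-script.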

wathoresanket commented 11 months ago

Sorry for not updating the status. I've downloaded all the pretrained weights (diffdet, dino, yolo) and I'm currently training the models. It's taking time; I'll post here if any issue arises.

wathoresanket commented 11 months ago


I trained the model successfully, but while running predict.py I'm getting a UNet error as follows:

UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3190.) return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Some model parameters or buffers are not found in the checkpoint: alphas_cumprod alphas_cumprod_prev backbone.bottom_up.norm0.{bias, weight} backbone.bottom_up.norm1.{bias, weight} backbone.bottom_up.norm2.{bias, weight} backbone.bottom_up.norm3.{bias, weight} backbone.fpn_lateral2.{bias, weight} backbone.fpn_lateral3.{bias, weight} backbone.fpn_lateral4.{bias, weight} backbone.fpn_lateral5.{bias, weight} backbone.fpn_output2.{bias, weight} backbone.fpn_output3.{bias, weight} backbone.fpn_output4.{bias, weight} backbone.fpn_output5.{bias, weight} betas head.head_series.0.bboxes_delta.{bias, weight} head.head_series.0.block_time_mlp.1.{bias, weight} head.head_series.0.class_logits.{bias, weight} head.head_series.0.cls_module.0.weight head.head_series.0.cls_module.1.{bias, weight} head.head_series.0.inst_interact.dynamic_layer.{bias, weight} head.head_series.0.inst_interact.norm1.{bias, weight} head.head_series.0.inst_interact.norm2.{bias, weight} head.head_series.0.inst_interact.norm3.{bias, weight} head.head_series.0.inst_interact.out_layer.{bias, weight} head.head_series.0.linear1.{bias, weight} head.head_series.0.linear2.{bias, weight} head.head_series.0.norm1.{bias, weight} head.head_series.0.norm2.{bias, weight} head.head_series.0.norm3.{bias, weight} head.head_series.0.reg_module.0.weight head.head_series.0.reg_module.1.{bias, weight} head.head_series.0.reg_module.3.weight head.head_series.0.reg_module.4.{bias, weight} head.head_series.0.reg_module.6.weight head.head_series.0.reg_module.7.{bias, weight} head.head_series.0.self_attn.out_proj.{bias, weight} head.head_series.0.self_attn.{in_proj_bias, in_proj_weight} 
head.head_series.1.bboxes_delta.{bias, weight} head.head_series.1.block_time_mlp.1.{bias, weight} head.head_series.1.class_logits.{bias, weight} head.head_series.1.cls_module.0.weight head.head_series.1.cls_module.1.{bias, weight} head.head_series.1.inst_interact.dynamic_layer.{bias, weight} head.head_series.1.inst_interact.norm1.{bias, weight} head.head_series.1.inst_interact.norm2.{bias, weight} head.head_series.1.inst_interact.norm3.{bias, weight} head.head_series.1.inst_interact.out_layer.{bias, weight} head.head_series.1.linear1.{bias, weight} head.head_series.1.linear2.{bias, weight} head.head_series.1.norm1.{bias, weight} head.head_series.1.norm2.{bias, weight} head.head_series.1.norm3.{bias, weight} head.head_series.1.reg_module.0.weight head.head_series.1.reg_module.1.{bias, weight} head.head_series.1.reg_module.3.weight head.head_series.1.reg_module.4.{bias, weight} head.head_series.1.reg_module.6.weight head.head_series.1.reg_module.7.{bias, weight} head.head_series.1.self_attn.out_proj.{bias, weight} head.head_series.1.self_attn.{in_proj_bias, in_proj_weight} head.head_series.2.bboxes_delta.{bias, weight} head.head_series.2.block_time_mlp.1.{bias, weight} head.head_series.2.class_logits.{bias, weight} head.head_series.2.cls_module.0.weight head.head_series.2.cls_module.1.{bias, weight} head.head_series.2.inst_interact.dynamic_layer.{bias, weight} head.head_series.2.inst_interact.norm1.{bias, weight} head.head_series.2.inst_interact.norm2.{bias, weight} head.head_series.2.inst_interact.norm3.{bias, weight} head.head_series.2.inst_interact.out_layer.{bias, weight} head.head_series.2.linear1.{bias, weight} head.head_series.2.linear2.{bias, weight} head.head_series.2.norm1.{bias, weight} head.head_series.2.norm2.{bias, weight} head.head_series.2.norm3.{bias, weight} head.head_series.2.reg_module.0.weight head.head_series.2.reg_module.1.{bias, weight} head.head_series.2.reg_module.3.weight head.head_series.2.reg_module.4.{bias, weight} 
head.head_series.2.reg_module.6.weight head.head_series.2.reg_module.7.{bias, weight} head.head_series.2.self_attn.out_proj.{bias, weight} head.head_series.2.self_attn.{in_proj_bias, in_proj_weight} head.head_series.3.bboxes_delta.{bias, weight} head.head_series.3.block_time_mlp.1.{bias, weight} head.head_series.3.class_logits.{bias, weight} head.head_series.3.cls_module.0.weight head.head_series.3.cls_module.1.{bias, weight} head.head_series.3.inst_interact.dynamic_layer.{bias, weight} head.head_series.3.inst_interact.norm1.{bias, weight} head.head_series.3.inst_interact.norm2.{bias, weight} head.head_series.3.inst_interact.norm3.{bias, weight} head.head_series.3.inst_interact.out_layer.{bias, weight} head.head_series.3.linear1.{bias, weight} head.head_series.3.linear2.{bias, weight} head.head_series.3.norm1.{bias, weight} head.head_series.3.norm2.{bias, weight} head.head_series.3.norm3.{bias, weight} head.head_series.3.reg_module.0.weight head.head_series.3.reg_module.1.{bias, weight} head.head_series.3.reg_module.3.weight head.head_series.3.reg_module.4.{bias, weight} head.head_series.3.reg_module.6.weight head.head_series.3.reg_module.7.{bias, weight} head.head_series.3.self_attn.out_proj.{bias, weight} head.head_series.3.self_attn.{in_proj_bias, in_proj_weight} head.head_series.4.bboxes_delta.{bias, weight} head.head_series.4.block_time_mlp.1.{bias, weight} head.head_series.4.class_logits.{bias, weight} head.head_series.4.cls_module.0.weight head.head_series.4.cls_module.1.{bias, weight} head.head_series.4.inst_interact.dynamic_layer.{bias, weight} head.head_series.4.inst_interact.norm1.{bias, weight} head.head_series.4.inst_interact.norm2.{bias, weight} head.head_series.4.inst_interact.norm3.{bias, weight} head.head_series.4.inst_interact.out_layer.{bias, weight} head.head_series.4.linear1.{bias, weight} head.head_series.4.linear2.{bias, weight} head.head_series.4.norm1.{bias, weight} head.head_series.4.norm2.{bias, weight} head.head_series.4.norm3.{bias, 
weight} head.head_series.4.reg_module.0.weight head.head_series.4.reg_module.1.{bias, weight} head.head_series.4.reg_module.3.weight head.head_series.4.reg_module.4.{bias, weight} head.head_series.4.reg_module.6.weight head.head_series.4.reg_module.7.{bias, weight} head.head_series.4.self_attn.out_proj.{bias, weight} head.head_series.4.self_attn.{in_proj_bias, in_proj_weight} head.head_series.5.bboxes_delta.{bias, weight} head.head_series.5.block_time_mlp.1.{bias, weight} head.head_series.5.class_logits.{bias, weight} head.head_series.5.cls_module.0.weight head.head_series.5.cls_module.1.{bias, weight} head.head_series.5.inst_interact.dynamic_layer.{bias, weight} head.head_series.5.inst_interact.norm1.{bias, weight} head.head_series.5.inst_interact.norm2.{bias, weight} head.head_series.5.inst_interact.norm3.{bias, weight} head.head_series.5.inst_interact.out_layer.{bias, weight} head.head_series.5.linear1.{bias, weight} head.head_series.5.linear2.{bias, weight} head.head_series.5.norm1.{bias, weight} head.head_series.5.norm2.{bias, weight} head.head_series.5.norm3.{bias, weight} head.head_series.5.reg_module.0.weight head.head_series.5.reg_module.1.{bias, weight} head.head_series.5.reg_module.3.weight head.head_series.5.reg_module.4.{bias, weight} head.head_series.5.reg_module.6.weight head.head_series.5.reg_module.7.{bias, weight} head.head_series.5.self_attn.out_proj.{bias, weight} head.head_series.5.self_attn.{in_proj_bias, in_proj_weight} head.time_mlp.1.{bias, weight} head.time_mlp.3.{bias, weight} log_one_minus_alphas_cumprod posterior_log_variance_clipped posterior_mean_coef1 posterior_mean_coef2 posterior_variance sqrt_alphas_cumprod sqrt_one_minus_alphas_cumprod sqrt_recip_alphas_cumprod sqrt_recipm1_alphas_cumprod The checkpoint state_dict contains keys that are not used by the model: layers.0.blocks.1.attn_mask layers.1.blocks.1.attn_mask layers.2.blocks.1.attn_mask layers.2.blocks.11.attn_mask layers.2.blocks.13.attn_mask layers.2.blocks.15.attn_mask 
layers.2.blocks.17.attn_mask layers.2.blocks.3.attn_mask layers.2.blocks.5.attn_mask layers.2.blocks.7.attn_mask layers.2.blocks.9.attn_mask norm.{bias, weight} head.{bias, weight} use_checkpoint!!!!!!!!!!!!!!!!!!!!!!!!
Traceback (most recent call last):
  File "/mnt/DATA/EE20B041/Desktop/Dentex_SegAndDet/predict.py", line 552, in <module>
    main()
  File "/mnt/DATA/EE20B041/anaconda3/envs/dentex_env_1/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/mnt/DATA/EE20B041/Desktop/Dentex_SegAndDet/predict.py", line 306, in main
    enumeration9_segmentation_models = load_unet("checkpoints/unet_9_last_epoch.pth", 10, cuda=cuda)
  File "/mnt/DATA/EE20B041/Desktop/Dentex_SegAndDet/models/unet/utils.py", line 49, in load_unet
    model.load_state_dict(model_state)
  File "/mnt/DATA/EE20B041/anaconda3/envs/dentex_env_1/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1667, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for UNet:
Missing key(s) in state_dict: "encode_down_layer1.conv_block.double_conv_block.0.weight", "encode_down_layer1.conv_block.double_conv_block.0.bias", "encode_down_layer1.conv_block.double_conv_block.1.weight", "encode_down_layer1.conv_block.double_conv_block.1.bias", "encode_down_layer1.conv_block.double_conv_block.1.running_mean", "encode_down_layer1.conv_block.double_conv_block.1.running_var", "encode_down_layer1.conv_block.double_conv_block.3.weight", "encode_down_layer1.conv_block.double_conv_block.3.bias", "encode_down_layer1.conv_block.double_conv_block.4.weight", "encode_down_layer1.conv_block.double_conv_block.4.bias", "encode_down_layer1.conv_block.double_conv_block.4.running_mean", "encode_down_layer1.conv_block.double_conv_block.4.running_var", "encode_down_layer2.conv_block.double_conv_block.0.weight", "encode_down_layer2.conv_block.double_conv_block.0.bias", 
"encode_down_layer2.conv_block.double_conv_block.1.weight", "encode_down_layer2.conv_block.double_conv_block.1.bias", "encode_down_layer2.conv_block.double_conv_block.1.running_mean", "encode_down_layer2.conv_block.double_conv_block.1.running_var", "encode_down_layer2.conv_block.double_conv_block.3.weight", "encode_down_layer2.conv_block.double_conv_block.3.bias", "encode_down_layer2.conv_block.double_conv_block.4.weight", "encode_down_layer2.conv_block.double_conv_block.4.bias", "encode_down_layer2.conv_block.double_conv_block.4.running_mean", "encode_down_layer2.conv_block.double_conv_block.4.running_var", "encode_down_layer3.conv_block.double_conv_block.0.weight", "encode_down_layer3.conv_block.double_conv_block.0.bias", "encode_down_layer3.conv_block.double_conv_block.1.weight", "encode_down_layer3.conv_block.double_conv_block.1.bias", "encode_down_layer3.conv_block.double_conv_block.1.running_mean", "encode_down_layer3.conv_block.double_conv_block.1.running_var", "encode_down_layer3.conv_block.double_conv_block.3.weight", "encode_down_layer3.conv_block.double_conv_block.3.bias", "encode_down_layer3.conv_block.double_conv_block.4.weight", "encode_down_layer3.conv_block.double_conv_block.4.bias", "encode_down_layer3.conv_block.double_conv_block.4.running_mean", "encode_down_layer3.conv_block.double_conv_block.4.running_var", "encode_down_layer4.conv_block.double_conv_block.0.weight", "encode_down_layer4.conv_block.double_conv_block.0.bias", "encode_down_layer4.conv_block.double_conv_block.1.weight", "encode_down_layer4.conv_block.double_conv_block.1.bias", "encode_down_layer4.conv_block.double_conv_block.1.running_mean", "encode_down_layer4.conv_block.double_conv_block.1.running_var", "encode_down_layer4.conv_block.double_conv_block.3.weight", "encode_down_layer4.conv_block.double_conv_block.3.bias", "encode_down_layer4.conv_block.double_conv_block.4.weight", "encode_down_layer4.conv_block.double_conv_block.4.bias", 
"encode_down_layer4.conv_block.double_conv_block.4.running_mean", "encode_down_layer4.conv_block.double_conv_block.4.running_var", "bottom_layer.double_conv_block.0.weight", "bottom_layer.double_conv_block.0.bias", "bottom_layer.double_conv_block.1.weight", "bottom_layer.double_conv_block.1.bias", "bottom_layer.double_conv_block.1.running_mean", "bottom_layer.double_conv_block.1.running_var", "bottom_layer.double_conv_block.3.weight", "bottom_layer.double_conv_block.3.bias", "bottom_layer.double_conv_block.4.weight", "bottom_layer.double_conv_block.4.bias", "bottom_layer.double_conv_block.4.running_mean", "bottom_layer.double_conv_block.4.running_var", "decode_up_layer4.conv_block.double_conv_block.0.weight", "decode_up_layer4.conv_block.double_conv_block.0.bias", "decode_up_layer4.conv_block.double_conv_block.1.weight", "decode_up_layer4.conv_block.double_conv_block.1.bias", "decode_up_layer4.conv_block.double_conv_block.1.running_mean", "decode_up_layer4.conv_block.double_conv_block.1.running_var", "decode_up_layer4.conv_block.double_conv_block.3.weight", "decode_up_layer4.conv_block.double_conv_block.3.bias", "decode_up_layer4.conv_block.double_conv_block.4.weight", "decode_up_layer4.conv_block.double_conv_block.4.bias", "decode_up_layer4.conv_block.double_conv_block.4.running_mean", "decode_up_layer4.conv_block.double_conv_block.4.running_var", "decode_up_layer4.up_sample_conv.weight", "decode_up_layer4.up_sample_conv.bias", "decode_up_layer3.conv_block.double_conv_block.0.weight", "decode_up_layer3.conv_block.double_conv_block.0.bias", "decode_up_layer3.conv_block.double_conv_block.1.weight", "decode_up_layer3.conv_block.double_conv_block.1.bias", "decode_up_layer3.conv_block.double_conv_block.1.running_mean", "decode_up_layer3.conv_block.double_conv_block.1.running_var", "decode_up_layer3.conv_block.double_conv_block.3.weight", "decode_up_layer3.conv_block.double_conv_block.3.bias", "decode_up_layer3.conv_block.double_conv_block.4.weight", 
"decode_up_layer3.conv_block.double_conv_block.4.bias", "decode_up_layer3.conv_block.double_conv_block.4.running_mean", "decode_up_layer3.conv_block.double_conv_block.4.running_var", "decode_up_layer3.up_sample_conv.weight", "decode_up_layer3.up_sample_conv.bias", "decode_up_layer2.conv_block.double_conv_block.0.weight", "decode_up_layer2.conv_block.double_conv_block.0.bias", "decode_up_layer2.conv_block.double_conv_block.1.weight", "decode_up_layer2.conv_block.double_conv_block.1.bias", "decode_up_layer2.conv_block.double_conv_block.1.running_mean", "decode_up_layer2.conv_block.double_conv_block.1.running_var", "decode_up_layer2.conv_block.double_conv_block.3.weight", "decode_up_layer2.conv_block.double_conv_block.3.bias", "decode_up_layer2.conv_block.double_conv_block.4.weight", "decode_up_layer2.conv_block.double_conv_block.4.bias", "decode_up_layer2.conv_block.double_conv_block.4.running_mean", "decode_up_layer2.conv_block.double_conv_block.4.running_var", "decode_up_layer2.up_sample_conv.weight", "decode_up_layer2.up_sample_conv.bias", "decode_up_layer1.conv_block.double_conv_block.0.weight", "decode_up_layer1.conv_block.double_conv_block.0.bias", "decode_up_layer1.conv_block.double_conv_block.1.weight", "decode_up_layer1.conv_block.double_conv_block.1.bias", "decode_up_layer1.conv_block.double_conv_block.1.running_mean", "decode_up_layer1.conv_block.double_conv_block.1.running_var", "decode_up_layer1.conv_block.double_conv_block.3.weight", "decode_up_layer1.conv_block.double_conv_block.3.bias", "decode_up_layer1.conv_block.double_conv_block.4.weight", "decode_up_layer1.conv_block.double_conv_block.4.bias", "decode_up_layer1.conv_block.double_conv_block.4.running_mean", "decode_up_layer1.conv_block.double_conv_block.4.running_var", "decode_up_layer1.up_sample_conv.weight", "decode_up_layer1.up_sample_conv.bias", "output_conv.weight", "output_conv.bias". 
Unexpected key(s) in state_dict: "block_1_1_left.conv1.conv.weight", "block_1_1_left.conv1.conv.bias", "block_1_1_left.conv1.norm.gamma.conv1.weight", "block_1_1_left.conv1.norm.gamma.conv1.bias", "block_1_1_left.conv1.norm.gamma.conv2.weight", "block_1_1_left.conv1.norm.gamma.conv2.bias", "block_1_1_left.conv1.norm.beta.conv1.weight", "block_1_1_left.conv1.norm.beta.conv1.bias", "block_1_1_left.conv1.norm.beta.conv2.weight", "block_1_1_left.conv1.norm.beta.conv2.bias", "block_1_1_left.res_conv.conv.weight", "block_1_1_left.res_conv.conv.bias", "block_1_1_left.res_conv.norm.gamma.conv1.weight", "block_1_1_left.res_conv.norm.gamma.conv1.bias", "block_1_1_left.res_conv.norm.gamma.conv2.weight", "block_1_1_left.res_conv.norm.gamma.conv2.bias", "block_1_1_left.res_conv.norm.beta.conv1.weight", "block_1_1_left.res_conv.norm.beta.conv1.bias", "block_1_1_left.res_conv.norm.beta.conv2.weight", "block_1_1_left.res_conv.norm.beta.conv2.bias", "block_1_2_left.conv1.conv.weight", "block_1_2_left.conv1.conv.bias", "block_1_2_left.conv1.norm.gamma.conv1.weight", "block_1_2_left.conv1.norm.gamma.conv1.bias", "block_1_2_left.conv1.norm.gamma.conv2.weight", "block_1_2_left.conv1.norm.gamma.conv2.bias", "block_1_2_left.conv1.norm.beta.conv1.weight", "block_1_2_left.conv1.norm.beta.conv1.bias", "block_1_2_left.conv1.norm.beta.conv2.weight", "block_1_2_left.conv1.norm.beta.conv2.bias", "block_2_1_left.conv1.conv.weight", "block_2_1_left.conv1.conv.bias", "block_2_1_left.conv1.norm.gamma.conv1.weight", "block_2_1_left.conv1.norm.gamma.conv1.bias", "block_2_1_left.conv1.norm.gamma.conv2.weight", "block_2_1_left.conv1.norm.gamma.conv2.bias", "block_2_1_left.conv1.norm.beta.conv1.weight", "block_2_1_left.conv1.norm.beta.conv1.bias", "block_2_1_left.conv1.norm.beta.conv2.weight", "block_2_1_left.conv1.norm.beta.conv2.bias", "block_2_1_left.res_conv.conv.weight", "block_2_1_left.res_conv.conv.bias", "block_2_1_left.res_conv.norm.gamma.conv1.weight", 
"block_2_1_left.res_conv.norm.gamma.conv1.bias", "block_2_1_left.res_conv.norm.gamma.conv2.weight", "block_2_1_left.res_conv.norm.gamma.conv2.bias", "block_2_1_left.res_conv.norm.beta.conv1.weight", "block_2_1_left.res_conv.norm.beta.conv1.bias", "block_2_1_left.res_conv.norm.beta.conv2.weight", "block_2_1_left.res_conv.norm.beta.conv2.bias", "block_2_2_left.conv1.conv.weight", "block_2_2_left.conv1.conv.bias", "block_2_2_left.conv1.norm.gamma.conv1.weight", "block_2_2_left.conv1.norm.gamma.conv1.bias", "block_2_2_left.conv1.norm.gamma.conv2.weight", "block_2_2_left.conv1.norm.gamma.conv2.bias", "block_2_2_left.conv1.norm.beta.conv1.weight", "block_2_2_left.conv1.norm.beta.conv1.bias", "block_2_2_left.conv1.norm.beta.conv2.weight", "block_2_2_left.conv1.norm.beta.conv2.bias", "block_2_3_left.conv1.conv.weight", "block_2_3_left.conv1.conv.bias", "block_2_3_left.conv1.norm.gamma.conv1.weight", "block_2_3_left.conv1.norm.gamma.conv1.bias", "block_2_3_left.conv1.norm.gamma.conv2.weight", "block_2_3_left.conv1.norm.gamma.conv2.bias", "block_2_3_left.conv1.norm.beta.conv1.weight", "block_2_3_left.conv1.norm.beta.conv1.bias", "block_2_3_left.conv1.norm.beta.conv2.weight", "block_2_3_left.conv1.norm.beta.conv2.bias", "block_3_1_left.conv1.conv.weight", "block_3_1_left.conv1.conv.bias", "block_3_1_left.conv1.norm.gamma.conv1.weight", "block_3_1_left.conv1.norm.gamma.conv1.bias", "block_3_1_left.conv1.norm.gamma.conv2.weight", "block_3_1_left.conv1.norm.gamma.conv2.bias", "block_3_1_left.conv1.norm.beta.conv1.weight", "block_3_1_left.conv1.norm.beta.conv1.bias", "block_3_1_left.conv1.norm.beta.conv2.weight", "block_3_1_left.conv1.norm.beta.conv2.bias", "block_3_1_left.res_conv.conv.weight", "block_3_1_left.res_conv.conv.bias", "block_3_1_left.res_conv.norm.gamma.conv1.weight", "block_3_1_left.res_conv.norm.gamma.conv1.bias", "block_3_1_left.res_conv.norm.gamma.conv2.weight", "block_3_1_left.res_conv.norm.gamma.conv2.bias", "block_3_1_left.res_conv.norm.beta.conv1.weight", 
"block_3_1_left.res_conv.norm.beta.conv1.bias", "block_3_1_left.res_conv.norm.beta.conv2.weight", "block_3_1_left.res_conv.norm.beta.conv2.bias", "block_3_2_left.conv1.conv.weight", "block_3_2_left.conv1.conv.bias", "block_3_2_left.conv1.norm.gamma.conv1.weight", "block_3_2_left.conv1.norm.gamma.conv1.bias", "block_3_2_left.conv1.norm.gamma.conv2.weight", "block_3_2_left.conv1.norm.gamma.conv2.bias", "block_3_2_left.conv1.norm.beta.conv1.weight", "block_3_2_left.conv1.norm.beta.conv1.bias", "block_3_2_left.conv1.norm.beta.conv2.weight", "block_3_2_left.conv1.norm.beta.conv2.bias", "block_3_3_left.conv1.conv.weight", "block_3_3_left.conv1.conv.bias", "block_3_3_left.conv1.norm.gamma.conv1.weight", "block_3_3_left.conv1.norm.gamma.conv1.bias", "block_3_3_left.conv1.norm.gamma.conv2.weight", "block_3_3_left.conv1.norm.gamma.conv2.bias", "block_3_3_left.conv1.norm.beta.conv1.weight", "block_3_3_left.conv1.norm.beta.conv1.bias", "block_3_3_left.conv1.norm.beta.conv2.weight", "block_3_3_left.conv1.norm.beta.conv2.bias", "block_4_1_left.conv1.conv.weight", "block_4_1_left.conv1.conv.bias", "block_4_1_left.conv1.norm.gamma.conv1.weight", "block_4_1_left.conv1.norm.gamma.conv1.bias", "block_4_1_left.conv1.norm.gamma.conv2.weight", "block_4_1_left.conv1.norm.gamma.conv2.bias", "block_4_1_left.conv1.norm.beta.conv1.weight", "block_4_1_left.conv1.norm.beta.conv1.bias", "block_4_1_left.conv1.norm.beta.conv2.weight", "block_4_1_left.conv1.norm.beta.conv2.bias", "block_4_1_left.res_conv.conv.weight", "block_4_1_left.res_conv.conv.bias", "block_4_1_left.res_conv.norm.gamma.conv1.weight", "block_4_1_left.res_conv.norm.gamma.conv1.bias", "block_4_1_left.res_conv.norm.gamma.conv2.weight", "block_4_1_left.res_conv.norm.gamma.conv2.bias", "block_4_1_left.res_conv.norm.beta.conv1.weight", "block_4_1_left.res_conv.norm.beta.conv1.bias", "block_4_1_left.res_conv.norm.beta.conv2.weight", "block_4_1_left.res_conv.norm.beta.conv2.bias", "block_4_2_left.conv1.conv.weight", 
"block_4_2_left.conv1.conv.bias", "block_4_2_left.conv1.norm.gamma.conv1.weight", "block_4_2_left.conv1.norm.gamma.conv1.bias", "block_4_2_left.conv1.norm.gamma.conv2.weight", "block_4_2_left.conv1.norm.gamma.conv2.bias", "block_4_2_left.conv1.norm.beta.conv1.weight", "block_4_2_left.conv1.norm.beta.conv1.bias", "block_4_2_left.conv1.norm.beta.conv2.weight", "block_4_2_left.conv1.norm.beta.conv2.bias", "block_4_3_left.conv1.conv.weight", "block_4_3_left.conv1.conv.bias", "block_4_3_left.conv1.norm.gamma.conv1.weight", "block_4_3_left.conv1.norm.gamma.conv1.bias", "block_4_3_left.conv1.norm.gamma.conv2.weight", "block_4_3_left.conv1.norm.gamma.conv2.bias", "block_4_3_left.conv1.norm.beta.conv1.weight", "block_4_3_left.conv1.norm.beta.conv1.bias", "block_4_3_left.conv1.norm.beta.conv2.weight", "block_4_3_left.conv1.norm.beta.conv2.bias", "block_5_1_left.conv1.conv.weight", "block_5_1_left.conv1.conv.bias", "block_5_1_left.conv1.norm.gamma.conv1.weight", "block_5_1_left.conv1.norm.gamma.conv1.bias", "block_5_1_left.conv1.norm.gamma.conv2.weight", "block_5_1_left.conv1.norm.gamma.conv2.bias", "block_5_1_left.conv1.norm.beta.conv1.weight", "block_5_1_left.conv1.norm.beta.conv1.bias", "block_5_1_left.conv1.norm.beta.conv2.weight", "block_5_1_left.conv1.norm.beta.conv2.bias", "block_5_1_left.res_conv.conv.weight", "block_5_1_left.res_conv.conv.bias", "block_5_1_left.res_conv.norm.gamma.conv1.weight", "block_5_1_left.res_conv.norm.gamma.conv1.bias", "block_5_1_left.res_conv.norm.gamma.conv2.weight", "block_5_1_left.res_conv.norm.gamma.conv2.bias", "block_5_1_left.res_conv.norm.beta.conv1.weight", "block_5_1_left.res_conv.norm.beta.conv1.bias", "block_5_1_left.res_conv.norm.beta.conv2.weight", "block_5_1_left.res_conv.norm.beta.conv2.bias", "block_5_2_left.conv1.conv.weight", "block_5_2_left.conv1.conv.bias", "block_5_2_left.conv1.norm.gamma.conv1.weight", "block_5_2_left.conv1.norm.gamma.conv1.bias", "block_5_2_left.conv1.norm.gamma.conv2.weight", 
"block_5_2_left.conv1.norm.gamma.conv2.bias", "block_5_2_left.conv1.norm.beta.conv1.weight", "block_5_2_left.conv1.norm.beta.conv1.bias", "block_5_2_left.conv1.norm.beta.conv2.weight", "block_5_2_left.conv1.norm.beta.conv2.bias", "block_5_3_left.conv1.conv.weight", "block_5_3_left.conv1.conv.bias", "block_5_3_left.conv1.norm.gamma.conv1.weight", "block_5_3_left.conv1.norm.gamma.conv1.bias", "block_5_3_left.conv1.norm.gamma.conv2.weight", "block_5_3_left.conv1.norm.gamma.conv2.bias", "block_5_3_left.conv1.norm.beta.conv1.weight", "block_5_3_left.conv1.norm.beta.conv1.bias", "block_5_3_left.conv1.norm.beta.conv2.weight", "block_5_3_left.conv1.norm.beta.conv2.bias", "upconv_4.weight", "upconv_4.bias", "block_4_1_right.conv.weight", "block_4_1_right.conv.bias", "block_4_1_right.norm.gamma.conv1.weight", "block_4_1_right.norm.gamma.conv1.bias", "block_4_1_right.norm.gamma.conv2.weight", "block_4_1_right.norm.gamma.conv2.bias", "block_4_1_right.norm.beta.conv1.weight", "block_4_1_right.norm.beta.conv1.bias", "block_4_1_right.norm.beta.conv2.weight", "block_4_1_right.norm.beta.conv2.bias", "block_4_2_right.conv.weight", "block_4_2_right.conv.bias", "block_4_2_right.norm.gamma.conv1.weight", "block_4_2_right.norm.gamma.conv1.bias", "block_4_2_right.norm.gamma.conv2.weight", "block_4_2_right.norm.gamma.conv2.bias", "block_4_2_right.norm.beta.conv1.weight", "block_4_2_right.norm.beta.conv1.bias", "block_4_2_right.norm.beta.conv2.weight", "block_4_2_right.norm.beta.conv2.bias", "vision_4.conv.conv.weight", "vision_4.conv.conv.bias", "vision_4.conv.norm.gamma.conv1.weight", "vision_4.conv.norm.gamma.conv1.bias", "vision_4.conv.norm.gamma.conv2.weight", "vision_4.conv.norm.gamma.conv2.bias", "vision_4.conv.norm.beta.conv1.weight", "vision_4.conv.norm.beta.conv1.bias", "vision_4.conv.norm.beta.conv2.weight", "vision_4.conv.norm.beta.conv2.bias", "upconv_3.weight", "upconv_3.bias", "block_3_1_right.conv.weight", "block_3_1_right.conv.bias", 
"block_3_1_right.norm.gamma.conv1.weight", "block_3_1_right.norm.gamma.conv1.bias", "block_3_1_right.norm.gamma.conv2.weight", "block_3_1_right.norm.gamma.conv2.bias", "block_3_1_right.norm.beta.conv1.weight", "block_3_1_right.norm.beta.conv1.bias", "block_3_1_right.norm.beta.conv2.weight", "block_3_1_right.norm.beta.conv2.bias", "block_3_2_right.conv.weight", "block_3_2_right.conv.bias", "block_3_2_right.norm.gamma.conv1.weight", "block_3_2_right.norm.gamma.conv1.bias", "block_3_2_right.norm.gamma.conv2.weight", "block_3_2_right.norm.gamma.conv2.bias", "block_3_2_right.norm.beta.conv1.weight", "block_3_2_right.norm.beta.conv1.bias", "block_3_2_right.norm.beta.conv2.weight", "block_3_2_right.norm.beta.conv2.bias", "vision_3.conv.conv.weight", "vision_3.conv.conv.bias", "vision_3.conv.norm.gamma.conv1.weight", "vision_3.conv.norm.gamma.conv1.bias", "vision_3.conv.norm.gamma.conv2.weight", "vision_3.conv.norm.gamma.conv2.bias", "vision_3.conv.norm.beta.conv1.weight", "vision_3.conv.norm.beta.conv1.bias", "vision_3.conv.norm.beta.conv2.weight", "vision_3.conv.norm.beta.conv2.bias", "upconv_2.weight", "upconv_2.bias", "block_2_1_right.conv.weight", "block_2_1_right.conv.bias", "block_2_1_right.norm.gamma.conv1.weight", "block_2_1_right.norm.gamma.conv1.bias", "block_2_1_right.norm.gamma.conv2.weight", "block_2_1_right.norm.gamma.conv2.bias", "block_2_1_right.norm.beta.conv1.weight", "block_2_1_right.norm.beta.conv1.bias", "block_2_1_right.norm.beta.conv2.weight", "block_2_1_right.norm.beta.conv2.bias", "block_2_2_right.conv.weight", "block_2_2_right.conv.bias", "block_2_2_right.norm.gamma.conv1.weight", "block_2_2_right.norm.gamma.conv1.bias", "block_2_2_right.norm.gamma.conv2.weight", "block_2_2_right.norm.gamma.conv2.bias", "block_2_2_right.norm.beta.conv1.weight", "block_2_2_right.norm.beta.conv1.bias", "block_2_2_right.norm.beta.conv2.weight", "block_2_2_right.norm.beta.conv2.bias", "vision_2.conv.conv.weight", "vision_2.conv.conv.bias", 
"vision_2.conv.norm.gamma.conv1.weight", "vision_2.conv.norm.gamma.conv1.bias", "vision_2.conv.norm.gamma.conv2.weight", "vision_2.conv.norm.gamma.conv2.bias", "vision_2.conv.norm.beta.conv1.weight", "vision_2.conv.norm.beta.conv1.bias", "vision_2.conv.norm.beta.conv2.weight", "vision_2.conv.norm.beta.conv2.bias", "upconv_1.weight", "upconv_1.bias", "block_1_1_right.conv.weight", "block_1_1_right.conv.bias", "block_1_1_right.norm.gamma.conv1.weight", "block_1_1_right.norm.gamma.conv1.bias", "block_1_1_right.norm.gamma.conv2.weight", "block_1_1_right.norm.gamma.conv2.bias", "block_1_1_right.norm.beta.conv1.weight", "block_1_1_right.norm.beta.conv1.bias", "block_1_1_right.norm.beta.conv2.weight", "block_1_1_right.norm.beta.conv2.bias", "block_1_2_right.conv.weight", "block_1_2_right.conv.bias", "block_1_2_right.norm.gamma.conv1.weight", "block_1_2_right.norm.gamma.conv1.bias", "block_1_2_right.norm.gamma.conv2.weight", "block_1_2_right.norm.gamma.conv2.bias", "block_1_2_right.norm.beta.conv1.weight", "block_1_2_right.norm.beta.conv1.bias", "block_1_2_right.norm.beta.conv2.weight", "block_1_2_right.norm.beta.conv2.bias", "conv1x1.weight", "conv1x1.bias".

alizare95 commented 9 months ago

Hello,

Thank you for your work on this project. Would it be possible to restore access to these pretrained model weights or provide updated links? Having access to pretrained weights would really help me hit the ground running with the project.

checkpoints/dino_pretrained_checkpoint0033_4scale.pth checkpoints/dino_pretrained_checkpoint0029_4scale_swin.pth

@xyzlancehe @vgthengane

AlexanderPeter commented 9 months ago


The problem is this: the code uses a random train/validation split. If someone shares their model weights but you evaluate with your own split, the results get distorted. You can instead execute the listed commands to train your own models.
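If weights were ever to be shared meaningfully, one option would be to pin the split to a fixed seed so every machine partitions the data identically. A minimal sketch, with an arbitrary 80/20 ratio and seed that are assumptions rather than values from this repository:

```python
import random

def deterministic_split(items, val_fraction=0.2, seed=42):
    """Sort first to normalize input order, then shuffle with a fixed
    seed so every run on every machine yields the identical
    train/validation partition."""
    items = sorted(items)
    rng = random.Random(seed)
    rng.shuffle(items)
    n_val = int(len(items) * val_fraction)
    return items[n_val:], items[:n_val]  # (train, val)
```

With this in place, a shared checkpoint and a shared seed would let two people evaluate on the same held-out images.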