chenlin opened this issue 1 month ago
Have you checked the keys in model.named_parameters() and the checkpoint?
I remember there were some naming issues when loading the DINO checkpoint, but for our weights this should not happen.
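A quick way to compare them is something like the following (a minimal sketch, assuming `model` is the already-instantiated network; the checkpoint path and the "model_state_dict" wrapper key are assumptions, adjust them to what the file actually contains):

```python
import torch

# Sketch: compare the model's parameter names with the checkpoint keys.
ckpt = torch.load("ckpt.pth", map_location="cpu")  # placeholder path
state_dict = ckpt.get("model_state_dict", ckpt) if isinstance(ckpt, dict) else ckpt

model_keys = {name for name, _ in model.named_parameters()}
ckpt_keys = set(state_dict.keys())

print("missing from checkpoint:", sorted(model_keys - ckpt_keys)[:10])
print("unexpected in checkpoint:", sorted(ckpt_keys - model_keys)[:10])
```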
Hi, I downloaded the "DINO2reg-ViT-Large" checkpoint and am using it to fine-tune on a custom dataset. However, when loading the state_dict, I noticed that the module names have an extra "depth_model.encoder" or "depth_model.decoder" prefix, for example:
Missing key(s) in state_dict: "cls_token", Unexpected key(s) in state_dict: "depth_model.encoder.cls_token".
I removed the "depth_model.encoder" (or "depth_model.decoder") prefix from the names, but now the error states that mask_token is missing while many modules are unexpected:
Missing key(s) in state_dict: "mask_token". Unexpected key(s) in state_dict: "token2feature.read_3.readoper.project_patch.weight", "token2feature.read_3.readoper.project_patch.bias", "token2feature.read_3.readoper.project_learn.weight", "token2feature.read_2.readoper.project_patch.weight",...
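For reference, this is roughly how I stripped the prefix before loading (a sketch; the checkpoint path is a placeholder and the "model_state_dict" wrapper key is a guess based on the checkpoint contents):

```python
import torch

# Sketch of the prefix stripping I tried; path and wrapper key are assumptions.
ckpt = torch.load("path/to/DINO2reg-ViT-Large.pth", map_location="cpu")
state_dict = ckpt.get("model_state_dict", ckpt) if isinstance(ckpt, dict) else ckpt

prefix = "depth_model.encoder."
encoder_sd = {k[len(prefix):]: v for k, v in state_dict.items() if k.startswith(prefix)}

# strict=False reports the mismatches instead of raising.
result = model.load_state_dict(encoder_sd, strict=False)
print("missing:", result.missing_keys)
print("unexpected:", result.unexpected_keys)
```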
Does anyone have an idea how to fix this issue? Thanks
Never mind. I was actually loading the Metric3D-trained weights when I was supposed to load the backbone weights from DINO.
Hi authors,
Thank you for your work. When I tried to evaluate the pretrained model, I encountered a RuntimeError while running test_vit.sh: "Error(s) in loading state_dict for DinoVisionTransformer: Missing key(s) in state_dict: ...". I believe this error is similar to the one reported in issue #101. I modified this line from
`model.load_state_dict(new_state_dict, strict=True)`
to
`model.load_state_dict(new_state_dict['model_state_dict'], strict=True)`
, but the issue persists. Could you please provide any assistance to resolve this problem? Thank you again for your kind help.
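For completeness, this is how I am inspecting the checkpoint while debugging (a minimal sketch; the path is a placeholder and `model` is assumed to be the instantiated DinoVisionTransformer):

```python
import torch

# Sketch for debugging the key mismatch; the path is a placeholder.
ckpt = torch.load("path/to/pretrained_checkpoint.pth", map_location="cpu")
if isinstance(ckpt, dict):
    print("top-level keys:", list(ckpt.keys())[:5])  # e.g. wrapped in 'model_state_dict'?

state_dict = ckpt.get("model_state_dict", ckpt)
print("first parameter keys:", list(state_dict.keys())[:5])  # any 'depth_model.' prefix?

# strict=False lists the mismatches instead of raising, which shows
# which prefixes would need to be stripped or added.
result = model.load_state_dict(state_dict, strict=False)
print("missing:", result.missing_keys[:10])
print("unexpected:", result.unexpected_keys[:10])
```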