Hi,
There seems to be an issue when I try to load the vit-b_CXR_0.5M_mae.pth file as a ViT-MAE base with 16x16 patch size using the models_mae.py module.
Here is my code:
import torch
import models_mae as mm

vitmae = mm.mae_vit_base_patch16()
vitmae.load_state_dict(
    state_dict=torch.load(f="vit-b_CXR_0.5M_mae.pth")
)
I get the following error:
Error(s) in loading state_dict for MaskedAutoencoderViT:
Missing key(s) in state_dict: "cls_token", "pos_embed", ...
Unexpected key(s) in state_dict: "model", "optimizer", "epoch", "scaler", "args".
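From the unexpected keys ("model", "optimizer", "epoch", ...), it looks like the .pth file is a full training checkpoint dict rather than a bare state_dict, so I suspect only the "model" entry should be passed to load_state_dict. Here is a minimal sketch of what I mean, using a stand-in nn.Linear and a temporary file since I can't attach the real checkpoint:

```python
import os
import tempfile

import torch
import torch.nn as nn

# Stand-in for the real model (mm.mae_vit_base_patch16() in my case).
model = nn.Linear(4, 2)

# Training scripts often save a checkpoint *dict*, not a bare state_dict:
path = os.path.join(tempfile.mkdtemp(), "ckpt.pth")
torch.save({"model": model.state_dict(), "epoch": 0}, path)

# torch.load returns that dict, so index into its "model" entry
# before calling load_state_dict on a fresh model instance.
checkpoint = torch.load(path, map_location="cpu")
fresh = nn.Linear(4, 2)
fresh.load_state_dict(checkpoint["model"])
```

If that is the intended usage for this checkpoint, it would be good to have it confirmed (and maybe documented).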