RuntimeError: Error(s) in loading state_dict for DINO:
    size mismatch for transformer.decoder.class_embed.0.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for transformer.decoder.class_embed.0.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for transformer.decoder.class_embed.1.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for transformer.decoder.class_embed.1.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for transformer.decoder.class_embed.2.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for transformer.decoder.class_embed.2.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for transformer.decoder.class_embed.3.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for transformer.decoder.class_embed.3.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for transformer.decoder.class_embed.4.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for transformer.decoder.class_embed.4.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for transformer.decoder.class_embed.5.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for transformer.decoder.class_embed.5.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for transformer.enc_out_class_embed.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for transformer.enc_out_class_embed.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for label_enc.weight: copying a param with shape torch.Size([92, 256]) from checkpoint, the shape in current model is torch.Size([6, 256]).
    size mismatch for class_embed.0.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for class_embed.0.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for class_embed.1.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for class_embed.1.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for class_embed.2.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for class_embed.2.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for class_embed.3.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for class_embed.3.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for class_embed.4.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for class_embed.4.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
    size mismatch for class_embed.5.weight: copying a param with shape torch.Size([91, 256]) from checkpoint, the shape in current model is torch.Size([5, 256]).
    size mismatch for class_embed.5.bias: copying a param with shape torch.Size([91]) from checkpoint, the shape in current model is torch.Size([5]).
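The error indicates a COCO-pretrained DINO checkpoint (91 classes, plus the 92-row `label_enc` denoising embedding) being loaded into a model configured for 5 classes (6-row `label_enc`), so every classification-head tensor has a different shape. A common remedy is to drop the shape-mismatched parameters from the checkpoint and load the rest with `strict=False`, leaving the heads at their fresh initialization for fine-tuning. A minimal sketch, assuming the checkpoint stores its weights under a `"model"` key (as the DINO repo's checkpoints do); the function and file names are illustrative:

```python
import torch


def load_filtered_checkpoint(model, ckpt_path):
    """Load a checkpoint into `model`, skipping any parameter whose shape
    differs from the current model (e.g. classification heads sized for a
    different number of classes). Returns the list of skipped keys."""
    checkpoint = torch.load(ckpt_path, map_location="cpu")
    # DINO checkpoints typically nest the weights under "model";
    # fall back to the raw dict otherwise.
    state_dict = checkpoint.get("model", checkpoint)
    model_state = model.state_dict()
    filtered = {
        k: v
        for k, v in state_dict.items()
        if k in model_state and v.shape == model_state[k].shape
    }
    skipped = [k for k in state_dict if k not in filtered]
    # strict=False tolerates the missing head keys; they keep their
    # random initialization and must be trained on the new dataset.
    model.load_state_dict(filtered, strict=False)
    return skipped
```

The alternative is to rebuild the model with `num_classes=91` so the shapes match, but that wastes head capacity on unused classes; filtering and re-training the heads is the usual fine-tuning route.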