Closed: ToABetterDay closed this issue 1 year ago.
I guess this is caused by a configuration conflict between the experiment `OrienterNet_MGL_reproduce` and the checkpoint `orienternet_mgl.ckpt`. It seems that `orienternet_mgl.ckpt` is the downloaded official checkpoint. In its configuration, `unary_prior` is set to `true`, so the `map_encoder` also produces the position prior as a 9th channel. However, your configuration (the default one) sets `unary_prior` to `false`, so the map encoder only outputs 8 channels, which results in the size mismatch.
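You can confirm this by inspecting the checkpoint directly. A minimal sketch, assuming the standard PyTorch Lightning checkpoint layout with a `state_dict` entry (the parameter name below is taken verbatim from the error message):

```python
import torch

# Load the official checkpoint on CPU and look at the adaptation layer.
ckpt = torch.load(
    "experiments/OrienterNet_MGL_reproduce/orienternet_mgl.ckpt",
    map_location="cpu",
)
# Assumption: a Lightning-style checkpoint that stores weights under "state_dict".
weight = ckpt["state_dict"]["model.map_encoder.encoder.adaptation.0.0.weight"]
print(weight.shape)  # expected: torch.Size([9, 64, 1, 1]), i.e. 8 features + 1 prior
```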
To solve this, I guess you could set `unary_prior: true` in `orienternet.yaml`, but you will probably run into other conflicts. I fine-tuned from a model that I had trained from scratch myself.
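If you would rather patch the config programmatically than edit the file by hand, something like the sketch below should work. Note that the file path and the exact nesting of `unary_prior` inside `orienternet.yaml` are assumptions here; locate the key in your copy of the file first:

```python
from omegaconf import OmegaConf

# Sketch: enable the unary prior so the map encoder outputs the 9th
# (prior) channel that the official checkpoint expects.
path = "orienternet.yaml"  # assumption: adjust to where the file lives in your checkout
cfg = OmegaConf.load(path)
cfg.unary_prior = True  # assumption: the key sits at this level; check your file
OmegaConf.save(cfg, path)
```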
Thank you! After setting `unary_prior` to `true`, there is no size mismatch and the fine-tuning works.
Hi, thanks for your sharing. When I tried to fine-tune the model on the KITTI dataset with this command:

```
python -m maploc.train experiment.name=OrienterNet_MGL_kitti data=kitti experiment.gpus=1 data.loading.train.batch_size=2 training.finetune_from_checkpoint='"experiments/OrienterNet_MGL_reproduce/orienternet_mgl.ckpt"'
```

it generated this error:

```
size mismatch for model.map_encoder.encoder.adaptation.0.0.weight: copying a param with shape torch.Size([9, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([8, 64, 1, 1]).
size mismatch for model.map_encoder.encoder.adaptation.0.0.bias: copying a param with shape torch.Size([9]) from checkpoint, the shape in current model is torch.Size([8]).
```

I want to ask what the correct size should be. I tried changing the configuration in `orienternet.yaml`, but it doesn't work.