Hi! Sorry, I pointed to the wrong repo.
We built the MinkowskiNet following the spvnas repo, but re-implemented it with MinkowskiEngine as done by segcontrast, and used their pretrained weights.
Since they store the state_dict under the key "model" and they don't have a batchnorm for each intermediate feature level, you have to modify your code as follows (in the train_model.py script):
w = torch.load("checkpoints/lastepoch199_model_segment_contrast.pt", map_location='cpu')
model.backbone.load_state_dict(w["model"], strict=False)
If you remove the strict=False, you'll see that the checkpoint doesn't include the batchnorm layers.
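If you want to double-check this, load_state_dict with strict=False returns the missing and unexpected keys, so you can print them. A small sketch based on the snippet above, assuming model.backbone is the MinkUNet built in train_model.py:

import torch

# Load the SegContrast checkpoint; the weights are stored under the "model" key.
w = torch.load("checkpoints/lastepoch199_model_segment_contrast.pt", map_location="cpu")
result = model.backbone.load_state_dict(w["model"], strict=False)

# Backbone parameters not present in the checkpoint (e.g. the extra batchnorm layers):
print("Missing keys:", result.missing_keys)
# Checkpoint entries with no matching parameter in the backbone:
print("Unexpected keys:", result.unexpected_keys)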
Sorry again and I hope this helps!
Thanks for the quick reply! I still have two questions:
1) Are the weights you used the SegContrast semantic segmentation weights fine-tuned on 100% of the labels? I downloaded them and got two checkpoints: epoch14_model_segment_contrast_1p0.pt and epoch14_model_head_segment_contrast_1p0.pt. Should I use only the first one to initialize the MinkUNet (w/o model.backbone.sem_head), or should I use both and also load the weights of model.backbone.sem_head?
2) In addition, is the backbone frozen or unfrozen during MaskPLS training? If it is unfrozen, is the learning rate of the backbone the same as for the rest of the model?
Thanks again for your kind reply!
Hi! 1) I used just the pre-trained weights with no fine-tuning. I think I tried both and the difference was not so big. I only used the weights for the network and not for the semantic head.
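As a rough sketch of what that looks like with the checkpoints you mention (I'm assuming the fine-tuned checkpoint also stores its weights under a "model" key, which you should verify; the head checkpoint is simply not loaded):

import torch

# Hypothetical sketch: initialize only the MinkUNet backbone from the network
# checkpoint; the semantic-head checkpoint (epoch14_model_head_...) is not used.
w = torch.load("checkpoints/epoch14_model_segment_contrast_1p0.pt", map_location="cpu")
# Assuming the same "model" key as the self-supervised checkpoint; strict=False
# skips the batchnorm layers (and any head parameters) missing from the checkpoint.
model.backbone.load_state_dict(w["model"], strict=False)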
2) The backbone is unfrozen since we want the network to learn meaningful multi-level features. If you want to keep the backbone frozen (maybe because the GPU is too small), you could also try using just the weights of the last layer instead of the multi-level ones.
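For reference, a generic PyTorch sketch of the two options (this is not the MaskPLS training code; model.backbone and the learning rates are just placeholders):

import torch

# Option A: freeze the backbone completely (e.g. if GPU memory is limited).
for p in model.backbone.parameters():
    p.requires_grad_(False)

# Option B: keep the backbone trainable but give it its own learning rate
# through optimizer parameter groups (values are purely illustrative).
backbone_params = list(model.backbone.parameters())
backbone_ids = {id(p) for p in backbone_params}
other_params = [p for p in model.parameters() if id(p) not in backbone_ids]
optimizer = torch.optim.AdamW(
    [
        {"params": backbone_params, "lr": 1e-4},
        {"params": other_params, "lr": 1e-3},
    ]
)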
I hope this helps! I did the training quite some time ago so it might be that I don't remember everything exactly but that's the main idea that I followed.
Ok! Thanks for your reply, I will give it a try. I'll report my results here later for further discussion.
Hi! @rmarcuzzi @comradexy I downloaded the backbone weights at SemanticKITTI_val_MinkUNet@114GMACs and used the code below to load the pre-trained weights:
But there's a Missing keys and Unexpected keys problem. How do you load the pre-trained weights, and are the backbone weights frozen during MaskPLS training? Is it convenient to provide the relevant code so that we can reproduce the results? Thanks in advance!
Originally posted by @yuyang-cloud in https://github.com/PRBonn/MaskPLS/issues/7#issuecomment-1652813073