yunshin / SphericalMask

Official implementation of "Spherical Mask: Coarse-to-Fine 3D Point Cloud Instance Segmentation with Spherical Representation"
Apache License 2.0

Using pretrained encoder weights leads to removed and missing keys warnings #8

Open cnmicha opened 4 months ago

cnmicha commented 4 months ago

In README.md, it says:

For the best training result, we recommend initializing the encoder with the pretrained-weights checkpoint (Download) from ISBNet.

I downloaded the .pth file and am trying to train the model. I see the following lines in the log:

2024-06-12 13:32:46,121 - INFO - Load pretrain from ./pretrains/isbnet_pretrain.pth
2024-06-12 13:32:46,336 - INFO - removed keys in source state_dict due to size mismatch: criterion.empty_weight, inst_sem_head.layers.6.weight, inst_sem_head.layers.6.bias
2024-06-12 13:32:46,337 - INFO - missing keys in source state_dict: criterion.empty_weight, criterion.mask_head.layers.0.weight, criterion.mask_head.layers.1.weight, criterion.mask_head.layers.1.bias, criterion.mask_head.layers.1.running_mean, criterion.mask_head.layers.1.running_var, criterion.mask_head.layers.3.weight, criterion.mask_head.layers.4.weight, criterion.mask_head.layers.4.bias, criterion.mask_head.layers.4.running_mean, criterion.mask_head.layers.4.running_var, criterion.mask_head.layers.6.weight, criterion.mask_head.layers.6.bias, criterion.norm.weight, criterion.norm.bias, criterion.norm.running_mean, criterion.norm.running_var, inst_sem_head.layers.6.weight, inst_sem_head.layers.6.bias, rid_module.layers.0.weight, rid_module.layers.1.weight, rid_module.layers.1.bias, rid_module.layers.1.running_mean, rid_module.layers.1.running_var, rid_module.layers.3.weight, rid_module.layers.4.weight, rid_module.layers.4.bias, rid_module.layers.4.running_mean, rid_module.layers.4.running_var, rid_module.layers.6.weight, rid_module.layers.6.bias, inst_center_head.layers.0.weight, inst_center_head.layers.1.weight, inst_center_head.layers.1.bias, inst_center_head.layers.1.running_mean, inst_center_head.layers.1.running_var, inst_center_head.layers.3.weight, inst_center_head.layers.4.weight, inst_center_head.layers.4.bias, inst_center_head.layers.4.running_mean, inst_center_head.layers.4.running_var, inst_center_head.layers.6.weight, inst_center_head.layers.6.bias, inst_inside_mask.layers.0.weight, inst_inside_mask.layers.1.weight, inst_inside_mask.layers.1.bias, inst_inside_mask.layers.1.running_mean, inst_inside_mask.layers.1.running_var, inst_inside_mask.layers.3.weight, inst_inside_mask.layers.4.weight, inst_inside_mask.layers.4.bias, inst_inside_mask.layers.4.running_mean, inst_inside_mask.layers.4.running_var, inst_inside_mask.layers.6.weight, inst_inside_mask.layers.7.weight, inst_inside_mask.layers.7.bias, inst_inside_mask.layers.7.running_mean, inst_inside_mask.layers.7.running_var, inst_inside_mask.layers.9.weight, inst_inside_mask.layers.9.bias, norm.weight, norm.bias, norm.running_mean, norm.running_var, inside_mask_head.layers.0.weight, inside_mask_head.layers.1.weight, inside_mask_head.layers.1.bias, inside_mask_head.layers.1.running_mean, inside_mask_head.layers.1.running_var, inside_mask_head.layers.3.weight, inside_mask_head.layers.4.weight, inside_mask_head.layers.4.bias, inside_mask_head.layers.4.running_mean, inside_mask_head.layers.4.running_var, inside_mask_head.layers.6.weight, inside_mask_head.layers.6.bias, inst_mask_head_angular.0.0.weight, inst_mask_head_angular.0.1.weight, inst_mask_head_angular.0.1.bias, inst_mask_head_angular.0.1.running_mean, inst_mask_head_angular.0.1.running_var, inst_mask_head_angular.1.0.weight, inst_mask_head_angular.1.1.weight, inst_mask_head_angular.1.1.bias, inst_mask_head_angular.1.1.running_mean, inst_mask_head_angular.1.1.running_var, inst_mask_head_angular.2.weight, inst_mask_head_angular.2.bias

Is this expected behavior?

If so, it would make sense to document this in the README.
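For context, my understanding is that these messages come from a partial state-dict load: keys whose tensor shapes differ between the checkpoint and the current model are dropped, and parameters that exist only in the Spherical Mask heads keep their random initialization. Below is a minimal sketch of that pattern in plain PyTorch; the helper name load_pretrained_encoder and the "net" wrapper key are hypothetical and not taken from this repository.

```python
import torch

def load_pretrained_encoder(model, ckpt_path):
    # Load the pretrained checkpoint on CPU.
    source = torch.load(ckpt_path, map_location="cpu")
    # Assumption: some checkpoints nest the weights under a "net" key.
    source = source.get("net", source)
    target = model.state_dict()

    # Drop source tensors whose shapes do not match the current model,
    # e.g. heads sized for a different number of classes.
    removed = [k for k, v in source.items()
               if k in target and v.shape != target[k].shape]
    filtered = {k: v for k, v in source.items() if k not in removed}

    # strict=False: parameters that exist only in the new model
    # (criterion.*, *_head.*, ...) stay randomly initialized and are
    # reported back as "missing keys".
    missing, unexpected = model.load_state_dict(filtered, strict=False)
    print("removed keys due to size mismatch:", removed)
    print("missing keys in source state_dict:", missing)
    return model
```

If that is roughly what the training code does, the warnings would be expected for every module that is new in Spherical Mask, but a short note in the README would still help.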