ideasplus opened this issue 1 year ago
Hi! Thank you for your question. Which pre-trained weights are you loading in the model? If you are training from scratch, you can set the variable `reuse_pos_emb: false` in the config file.
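For reference, a minimal sketch of what that setting could look like in the YAML config. Only the `reuse_pos_emb` key is confirmed above; the comment is illustrative:

```yaml
# Skip loading pre-trained positional embeddings when training from scratch
reuse_pos_emb: false
```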
Hi, I load the pre-trained model `model_skitti_train_cs_init_h128.pth`, downloaded from the link you provided.
```
Init a recoder at trained_models/log_exp_kitti
Loading pretrained parameters from pretrained_models/model_skitti_train_cs_init_h128.pth
Loading pretrained parameters from pretrained_models/model_skitti_train_cs_init_h128.pth
Loading pretrained parameters from pretrained_models/model_skitti_train_cs_init_h128.pth
Loading pretrained parameters from pretrained_models/model_skitti_train_cs_init_h128.pth
dict_keys(['model', 'epoch'])
```
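The log line `dict_keys(['model', 'epoch'])` hints at the cause of the error reported in this thread: the released RangeViT checkpoint is a wrapper dict with top-level keys `'model'` and `'epoch'`, whereas the backbone-loading code looks up flat keys such as `'encoder.pos_embed'`. A small sketch (the dict contents are stand-ins, not the real checkpoint):

```python
# Stand-in for torch.load('pretrained_models/model_skitti_train_cs_init_h128.pth'):
# a full RangeViT checkpoint wraps the weights under a 'model' key.
rangevit_checkpoint = {"model": {"stem.conv.weight": None}, "epoch": 60}

# Stand-in for a plain ViT image-backbone state dict, which is what
# --pretrained_model actually expects.
backbone_state_dict = {"encoder.pos_embed": None, "encoder.cls_token": None}

def has_pos_embed(state_dict):
    # resize_pos_embed indexes state_dict['encoder.pos_embed'] directly,
    # so a missing key raises KeyError.
    return "encoder.pos_embed" in state_dict

print(has_pos_embed(rangevit_checkpoint))  # False -> KeyError: 'encoder.pos_embed'
print(has_pos_embed(backbone_state_dict))  # True
```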
Hi @ideasplus. Can you share the command that you are running for using the code?
Sure, I use the following command:

```shell
python -m torch.distributed.launch --nproc_per_node=4 --master_port=63545 --use_env main.py 'config_kitti.yaml' ./semantic-kitti/sequences/ --save_path trained_models/ --pretrained_model pretrained_models/model_skitti_train_cs_init_h128.pth
```
Hello, I have encountered the same problem as you. Have you solved it?
Hi there,

When I try to run your code on the Semantic-KITTI dataset, I get the following error:

```
Reusing positional embeddings.
Traceback (most recent call last):
  File "main.py", line 334, in <module>
    exp = Experiment(settings)
  File "main.py", line 80, in __init__
    self.model = self._initModel()
  File "main.py", line 91, in _initModel
    model = build_rangevit_model(
  File "main.py", line 32, in build_rangevit_model
    model = models.RangeViT(
  File "/data/rangevit/models/rangevit.py", line 347, in __init__
    resized_pos_emb = resize_pos_embed(pretrained_state_dict['encoder.pos_embed'],
KeyError: 'encoder.pos_embed'
```

Could you tell me how to solve this problem?
Hi! Thank you for your question. Which pre-trained weights are you loading in the model? If you are training from scratch, you can set the variable `reuse_pos_emb: false` in the config file.
I have encountered the same error (`KeyError: 'encoder.pos_embed'`) even after setting the variable `reuse_pos_emb: false`. The command I run is:

```shell
python -m torch.distributed.launch --nproc_per_node=4 --master_port=63545 --use_env main.py 'config_kitti.yaml' ./semantic-kitti/sequences/ --save_path trained_models/ --pretrained_model pretrained_models/model_skitti_train_cs_init_h128.pth
```
Hello. The `--pretrained_model` argument expects a pre-trained image backbone, which is used to initialize the ViT encoder inside RangeViT. You can download the pre-trained weights for these image backbones from here:
In particular, we initialize RangeViT’s backbone with ViTs pre-trained (a) on supervised ImageNet-21k classification and fine-tuned on supervised image segmentation on Cityscapes with Segmenter (entry Cityscapes), (b) on supervised ImageNet-21k classification (entry IN21k), (c) with the DINO self-supervised approach on ImageNet-1k (entry DINO), and (d) trained from scratch (entry Random). The Cityscapes pre-trained ViT encoder weights can be downloaded from here.
In the command that you sent, it seems that you pass a pre-trained RangeViT model as the path, which is not what the training script expects in this `--pretrained_model` argument. Also, there is no need to train this already pre-trained RangeViT model. You can directly evaluate it by running:
```shell
python -m torch.distributed.launch --nproc_per_node=1 --master_port=63545 \
    --use_env main.py 'config_nusc.yaml' \
    --data_root './semantic-kitti/sequences/' \
    --save_path '<path_to_log>' \
    --checkpoint 'pretrained_models/model_skitti_train_cs_init_h128.pth' \
    --val_only
```
Hi @gidariss,

I am trying to run evaluation of the pre-trained RangeViT model on the nuScenes dataset. However, it seems the pre-trained RangeViT checkpoint doesn't contain a key named `"epoch"`, which is needed in `_loadCheckpoint`. I suppose I can set it to an arbitrary number, right?
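If an arbitrary value is indeed acceptable, one possible workaround is to insert a placeholder `"epoch"` entry into the checkpoint dict before handing it to `_loadCheckpoint`. This is only a sketch under that assumption; the dict contents below are stand-ins for the real `torch.load(...)` result:

```python
# Stand-in for: checkpoint = torch.load('pretrained_models/...', map_location='cpu')
checkpoint = {"model": {"stem.conv.weight": None}}  # no 'epoch' key in the release

# Add a placeholder epoch only if it is missing; for --val_only runs the
# actual value should not matter, since no training is resumed.
checkpoint.setdefault("epoch", 0)

print(checkpoint["epoch"])  # 0
```

After patching, the dict could be saved back with `torch.save(checkpoint, path)` so the unmodified loading code finds the key it expects.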