fcdl94 / MiB

Official code for Modeling the Background for Incremental Learning in Semantic Segmentation https://arxiv.org/abs/2002.00718
MIT License

I want to ask a question: how do I load the pre-trained model? Please tell me, thank you very much #40

Closed — houxushi closed this issue 2 years ago

fcdl94 commented 3 years ago

We used the pretrained model released by the authors of In-place ABN (as stated in the paper); it can be found in their GitHub project: https://github.com/mapillary/inplace_abn.

Since the pretrained models were trained on multiple GPUs, every key in their state dict carries a `module.` prefix. Please be sure to remove it to be compatible with this code (simply rename the keys using `key = key[7:]`). If you don't want to use a pretrained model, pass `--no-pretrained`.
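To make the renaming concrete, here is a tiny illustration with made-up key names (dummy integers stand in for the actual weight tensors):

```python
# A checkpoint saved from a torch.nn.DataParallel model prefixes every
# parameter name with "module."; len("module.") == 7, hence key[7:].
checkpoint = {
    "module.conv1.weight": 1,  # dummy value in place of a tensor
    "module.bn1.weight": 2,
}

# Strip the first 7 characters ("module.") from each key
state = {k[7:]: v for k, v in checkpoint.items()}

print(sorted(state))  # ['bn1.weight', 'conv1.weight']
```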

houxushi commented 3 years ago


I don't understand what "module" and the keys are, and I don't know how to change them in this code. Could you tell me the details? How can I rename them? I don't know how to apply `key = key[7:]`. Sorry to disturb you.

fcdl94 commented 3 years ago

Oh sure, sorry!

So, basically you can have a script similar to this:

```python
import torch

# Load the original In-place ABN checkpoint
checkpoint = torch.load("the checkpoint path")

# Copy every entry, dropping the "module." prefix (7 characters) from each key
state = {}
for k, v in checkpoint.items():
    state[k[7:]] = v

torch.save(state, "the destination path")
```

The destination path should be something like `pretrained/{opts.backbone}_{opts.norm_act}.pth.tar`, and it will be automatically loaded by the code.

Hope it's clearer now :)

houxushi commented 3 years ago


Thank you very much for taking the time to answer my question! I will study your reply carefully. Good luck! :)