Closed: Hoodythree closed this issue 4 years ago
Calling torch.load('model_best.pth') will require the parse_config module, since the checkpoint dictionary contains the config object as an item. You can remove this item by simply commenting out that line in base_trainer.py:
state = {
'arch': arch,
'epoch': epoch,
'state_dict': self.model.state_dict(),
'optimizer': self.optimizer.state_dict(),
'monitor_best': self.mnt_best,
# 'config': self.config
}
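The saving logic above, with the 'config' entry commented out, can be sketched end to end. This is a minimal self-contained sketch, not code from the project: the nn.Linear model, optimizer, and file name are placeholders standing in for self.model, self.optimizer, and the real checkpoint path.

```python
import torch
import torch.nn as nn

# Stand-ins for self.model and self.optimizer in base_trainer.py.
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

state = {
    'arch': 'Linear',
    'epoch': 1,
    'state_dict': model.state_dict(),
    'optimizer': optimizer.state_dict(),
    'monitor_best': 0.0,
    # 'config': self.config  # omitted, so loading needs no parse_config
}
torch.save(state, 'model_best.pth')

# Loads cleanly with plain torch, no project modules required.
checkpoint = torch.load('model_best.pth', map_location='cpu')
model.load_state_dict(checkpoint['state_dict'])
```

Because every value in the dictionary is a tensor, string, or number, nothing in the file refers back to the project's own classes.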
Then, referring to test.py#L34-L37, load the checkpoint. Unfortunately, checkpoints saved this way will not be usable for resuming the training process. If you don't want that, or you have to use an existing checkpoint which was already saved with the config object, you can write a simple script that removes the config object from the checkpoint file:
import torch
checkpoint_path = 'model_best.pth'
checkpoint = torch.load(checkpoint_path, map_location=torch.device('cpu'))
del checkpoint['config']
torch.save('updated_model_best.pth')
Then run this script at the project root dir, since unpickling the config object requires the parse_config module.
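The clean-up flow can also be sketched in a self-contained way. Here a plain dict stands in for the real config object (the real one is why the actual script must run at the project root, where parse_config is importable); the model and file names are placeholders, not project code.

```python
import torch
import torch.nn as nn

# Create a checkpoint that, like the project's, carries a config entry.
# A plain dict stands in for the real config object in this sketch.
model = nn.Linear(4, 2)
torch.save({'state_dict': model.state_dict(),
            'config': {'name': 'demo'}},
           'model_best.pth')

# Strip the config entry and re-save under a new name.
checkpoint = torch.load('model_best.pth', map_location=torch.device('cpu'))
del checkpoint['config']
torch.save(checkpoint, 'updated_model_best.pth')  # pass the object, then the path

# The cleaned file now loads with plain torch.load, no parse_config needed.
cleaned = torch.load('updated_model_best.pth', map_location='cpu')
model.load_state_dict(cleaned['state_dict'])
```

Note that torch.save takes the object first and the path second; reversing them is an easy mistake.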
Thanks a lot.
@SunQpark
I think you are missing checkpoint in the last line; it should be torch.save(checkpoint, 'updated_model_best.pth').
Great work! I have a little question about loading checkpoints. The checkpoints in your project are saved in this way:
And I just want to load a checkpoint in a very simple way, for some reason, just like this:
But an error occurred:
Is there any way to do this simple checkpoint loading (without parse_config)? Thanks in advance.