From the image captioning tutorial, is there any way to continue training from the pre-trained model? If so, how do we load the model's state_dict and the optimizer's state_dict? As far as I can make out, I am unable to load these attributes from the pickle files decoder-5-3000.pkl and encoder-5-3000.pkl.
So I can't perform the following:
checkpoint = torch.load('decoder-5-3000.pkl')
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
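For what it's worth, one likely explanation: if those files were saved the way the tutorial's training script saves them, i.e. `torch.save(decoder.state_dict(), path)`, then `torch.load` returns the state_dict itself, not a checkpoint dictionary, so there are no `'model_state_dict'` / `'optimizer_state_dict'` keys to index into. A minimal sketch of that reading (assuming the file contains a bare state_dict; `TinyDecoder` is a hypothetical stand-in for the tutorial's decoder):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the tutorial's LSTM-based decoder.
class TinyDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)

model = TinyDecoder()

# Save the way the tutorial script does: the state_dict itself,
# NOT a dict like {'model_state_dict': ..., 'optimizer_state_dict': ...}.
torch.save(model.state_dict(), 'decoder-5-3000.pkl')

# Loading therefore yields the state_dict directly; pass it straight in.
state_dict = torch.load('decoder-5-3000.pkl')
model.load_state_dict(state_dict)
```

If that is the case, the optimizer state was never written to disk, so there is nothing to restore for it; you would re-create the optimizer fresh (e.g. `torch.optim.Adam(model.parameters(), lr=...)`) and accept that its momentum/step statistics start over when you resume training.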
Any help would be much appreciated.