After a couple of epochs of training, the model has saved a number of checkpoints, and I want to resume training from the latest one. But when I pass the saved ckpt path via the actual_resume parameter, I get an error saying that the key state_dict doesn't exist. When I checked the output of torch.load in load_model_from_config, I found it to be an empty dictionary. Is there a different parameter I need to use to resume training?
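For reference, this is roughly how I'm inspecting the checkpoint contents (the path below is just a placeholder for my saved .ckpt):

```python
import torch

# Placeholder path; substitute the checkpoint produced by the training run.
ckpt_path = "logs/checkpoints/last.ckpt"

# Load on CPU and look at the top-level keys before anything tries to read them.
ckpt = torch.load(ckpt_path, map_location="cpu")
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))
    # A Lightning-style checkpoint normally exposes the weights under "state_dict";
    # in my case the dict comes back empty, so this lookup fails with a KeyError.
    state_dict = ckpt.get("state_dict")
    print("state_dict present:", state_dict is not None)
```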