Closed: HobisPL closed this issue 7 months ago.
See the checkpoint path in train.yaml, under configs.
Do you mean 'ckpt_path:'? But that is for resuming training; the epoch count, optimizer state, etc. are restored from the previous run.
Yes, you can resume with a new dataset and reset the epoch counter to 0 in `on_load_checkpoint`; see the PyTorch Lightning docs for more info.
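A minimal sketch of that hook, assuming a Lightning-style checkpoint dict: `on_load_checkpoint` receives the raw checkpoint and may edit it in place before the Trainer restores its state. The keys shown ("epoch", "global_step") are the ones Lightning stores, but the exact layout can vary by version, so treat this as illustrative.

```python
# Sketch: reset training progress when resuming from a checkpoint.
# In practice this method would live on your LightningModule; it is
# shown here as a standalone mixin for clarity.
class TransferMixin:
    def on_load_checkpoint(self, checkpoint: dict) -> None:
        # Start counting epochs/steps from zero on the new dataset,
        # while the model weights are still restored as usual.
        checkpoint["epoch"] = 0
        checkpoint["global_step"] = 0
```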
OK, thanks for the reply.
Hi, in train.py, after the model is instantiated, add this:

if cfg.get("transfer_ckpt_path"):
    # load only the weights from the pretrained checkpoint
    state_dict = torch.load(cfg.get("transfer_ckpt_path"), map_location="cpu")["state_dict"]
    model.load_state_dict(state_dict)
    log.info(f"Loaded model from {cfg.get('transfer_ckpt_path')}")
In train.yaml, add a new flag "transfer_ckpt_path" (alongside ckpt_path). This flag will load only the state dict (weights) and ignore everything else from the pretrained checkpoint. Try it and let me know if it works.
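For reference, the new flag could sit next to the existing ckpt_path entry in train.yaml; the name and null default below are assumptions, not part of the original config:

```yaml
# resumes training fully (epochs, optimizer state, ...)
ckpt_path: null
# hypothetical flag: load only the weights from this checkpoint
transfer_ckpt_path: null
```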
If you use PyTorch Lightning's load_from_checkpoint, it loads everything (hparams, etc.), so it is basically almost like resuming training instead of transfer. So I think loading the PyTorch state dict directly is better.
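To make the difference concrete, here is a self-contained sketch in plain PyTorch (function name and checkpoint keys are assumptions modeled on Lightning's checkpoint format): it reads only the "state_dict" entry and ignores the epoch, optimizer state, and anything else stored alongside it.

```python
import tempfile

import torch
import torch.nn as nn


def load_weights_only(model: nn.Module, ckpt_path: str) -> nn.Module:
    """Load just the weights from a Lightning-style checkpoint file."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    # Lightning checkpoints keep weights under "state_dict"; other keys
    # (epoch, optimizer_states, hparams, ...) are deliberately skipped.
    model.load_state_dict(ckpt["state_dict"])
    return model


# Usage: fake a checkpoint with extra training state, then load only weights.
src, dst = nn.Linear(4, 2), nn.Linear(4, 2)
with tempfile.NamedTemporaryFile(suffix=".ckpt") as f:
    torch.save(
        {"state_dict": src.state_dict(), "epoch": 17, "optimizer_states": []},
        f.name,
    )
    load_weights_only(dst, f.name)
```

After the call, `dst` has `src`'s weights, but nothing about the previous run's epoch or optimizer carries over.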
Hi, it is working, thank you!
I've trained a model and would like to use it as a base for training another speaker. Where should I specify the path to the pre-trained model?