I'd like to be able to reload a trained checkpoint file and use it to repeat several of the steps in the training file; specifically, I want to try extrapolating the face reconstruction from our tuned model out to more frames and see the results. However, I've noticed that the file we end up saving with:
save_tuned_G(run_id)
is significantly different from the model we initially load when we run load_old_G(). For starters, we initially loaded a .pkl, but we now save torch .pt files. Were any weights or other data left out when we saved the tuned files that must also be saved if we want to reload the tuned run data?
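To sanity-check my own understanding of the .pkl-vs-.pt difference, here is a minimal round-trip sketch. The generator is reduced to a tiny stand-in module, and the path is made up; I'm only assuming that save_tuned_G(run_id) ultimately serializes the weights with torch.save, which would mean the tensors survive the format change intact:

```python
import os
import tempfile

import torch
import torch.nn as nn

# Stand-in for the generator; in the real repo this would be the
# StyleGAN G that was fine-tuned (the module here is illustrative only).
G = nn.Linear(4, 4)

# What save_tuned_G(run_id) seems to boil down to: serializing the
# tuned weights with torch.save into a .pt file.
ckpt_path = os.path.join(tempfile.mkdtemp(), 'model_run_id.pt')
torch.save(G.state_dict(), ckpt_path)

# Reloading therefore goes through torch.load rather than the raw
# pickle.load used for the original .pkl -- but the tensors inside are
# the same kind of data, so nothing is inherently lost by the format.
G_restored = nn.Linear(4, 4)
G_restored.load_state_dict(torch.load(ckpt_path))

assert torch.equal(G.weight, G_restored.weight)
```

If that assumption about save_tuned_G holds, the format difference alone shouldn't cost us any weights; the question is whether any other runtime state was dropped.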
Or could we simply take the generator loaded in edit_video.py and assign it to coach.G, in place of the self.G = load_old_G() call that runs when train.py invokes coach.train()? Is that all it would take to resume with our training-runtime model?
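Concretely, the swap I have in mind looks like the sketch below. Coach and the generator are reduced to placeholders here, since I'm describing the wiring rather than the actual repo API (all names in this block are assumptions based on the code as I read it):

```python
class TunedGenerator:
    """Placeholder for the generator that edit_video.py would torch.load()."""
    name = 'tuned_G'

class Coach:
    """Minimal stand-in for the training coach (illustrative only)."""
    def __init__(self):
        # Normally the coach would do self.G = load_old_G() here,
        # pulling the pretrained .pkl generator.
        self.G = None

    def train(self):
        # Training proceeds from whatever generator is attached.
        assert self.G is not None, 'attach a generator before training'
        return self.G.name

coach = Coach()
coach.G = TunedGenerator()  # swap in the tuned generator instead of load_old_G()
print(coach.train())        # -> tuned_G
```

Is attaching the reloaded generator like this sufficient, or does the coach hold other state (optimizer, latents, etc.) that also has to be restored?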