rotemtzaban / STIT

MIT License
1.2k stars · 170 forks

Checkpoint architecture #45

Open skyler14 opened 1 year ago

skyler14 commented 1 year ago

I am interested in being able to reload a trained checkpoint file and use it to repeat several of the steps in the training script; specifically, I'd like to try extrapolating the face reconstruction from our tuned model to more frames and see the results. However, I've noticed that the file we end up saving with:

save_tuned_G(run_id)

is significantly different from the model we initially load with load_old_G(). For starters, we initially loaded a .pkl, but now we are saving torch .pt files. Were any weights or other data left out when we saved the tuned files that must also be saved if we want to reload the tuned run data later?
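If save_tuned_G only writes a state_dict (an assumption; worth checking against the repo's save code), then the .pt file carries weights but no architecture, so reloading would mean rebuilding the generator from the original .pkl first and then loading the tuned weights into it. A minimal sketch of that pattern, with a toy module standing in for the StyleGAN generator:

```python
import io

import torch
import torch.nn as nn

# Stand-in for the generator; in STIT this would be the StyleGAN G
# that load_old_G() deserializes from the original .pkl.
net = nn.Linear(4, 2)

# "Tuning": perturb the weights, then save only the state dict --
# a plausible reading of what save_tuned_G writes to the .pt file.
with torch.no_grad():
    net.weight.add_(1.0)
buf = io.BytesIO()  # in-memory stand-in for the .pt file on disk
torch.save(net.state_dict(), buf)

# To reload, rebuild the same architecture first (here a fresh Linear;
# in STIT, via load_old_G()), then overwrite its weights from the .pt data.
buf.seek(0)
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load(buf))

assert torch.equal(restored.weight, net.weight)
assert torch.equal(restored.bias, net.bias)
```

If save_tuned_G instead pickles the whole module (torch.save(G, ...)), a plain torch.load would give back the full object and the rebuild step is unnecessary.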

Alternatively, could we take the generator loaded in edit_video.py and assign it to coach.G, in place of the self.G = load_old_G() call that runs when train.py invokes coach.train()? Would that be all it takes to resume with our trained runtime model?
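That second approach amounts to injecting an already-loaded generator into the coach rather than letting it call load_old_G() itself. A toy sketch of the pattern (this Coach is a hypothetical simplification for illustration, not the repo's actual class):

```python
import torch.nn as nn

class Coach:
    """Hypothetical, minimal stand-in for STIT's training coach."""

    def __init__(self, G):
        # Instead of hard-coding self.G = load_old_G(), accept any
        # pre-loaded generator, e.g. the tuned one reloaded from a checkpoint.
        self.G = G

    def train_step(self):
        # Real training would run the tuning optimization here; this just
        # confirms the injected generator is the one being used.
        return self.G

tuned_G = nn.Linear(4, 2)  # stand-in for the reloaded tuned generator
coach = Coach(tuned_G)
assert coach.train_step() is tuned_G
```

Whether this is sufficient in practice depends on what other state the real coach initializes alongside G (optimizer state, pivot latents, etc.), which a plain generator swap would not restore.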