Closed — Foggzie closed this issue 6 years ago
@GuntherFox You can set the --keep-checkpoints option to a high enough value so that older .ckpt files don't get deleted. After training, create a backup of your models folder, then edit the "checkpoint" file and set model_checkpoint_path to the .ckpt file associated with the best results. Now, when you rerun learn.py with the --load option, training should resume from that checkpoint.
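For anyone else doing this by hand: the "checkpoint" file is just a small text file, so the edit above can be scripted. Here's a minimal sketch, assuming the standard TensorFlow checkpoint-file layout (a `model_checkpoint_path: "..."` line followed by `all_model_checkpoint_paths` entries); the helper name and paths are my own, not part of ML-Agents:

```python
import os
import re

def set_checkpoint(model_dir, ckpt_name):
    """Rewrite model_checkpoint_path in model_dir/checkpoint to ckpt_name.

    model_dir: folder containing the TensorFlow "checkpoint" index file
    ckpt_name: name of the .ckpt checkpoint to resume from
    """
    path = os.path.join(model_dir, "checkpoint")
    with open(path) as f:
        text = f.read()
    # Only the first line (model_checkpoint_path) decides what --load resumes
    # from; leave the all_model_checkpoint_paths entries untouched.
    text = re.sub(r'model_checkpoint_path: "[^"]*"',
                  'model_checkpoint_path: "%s"' % ckpt_name,
                  text, count=1)
    with open(path, "w") as f:
        f.write(text)
```

Usage would be something like `set_checkpoint("./models/my-run", "model-20000.ckpt")` (hypothetical paths) before rerunning learn.py with --load.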
Awesome, thanks a bunch! This is exactly what I was lookin' for.
Thanks @mbaske, you just saved me a few hours of retraining after Python threw a wobbly. 👍
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
I'm having an issue where, when I leave an agent training overnight, I come back to find it has gotten significantly worse, and the cumulative reward never returns to an upward trend. What could I be doing wrong, and can I "rewind" the model to its peak state?