Closed skynox03 closed 3 years ago
Hi @ShivamArora-SA,
You probably used the same seed for the two training runs? Loading a saved model only loads the policy/value neural networks, but the outputs should be different if you select two different seeds (through the --seed argument of experiments.py) for the policy/environment randomness.
Hi,
I am facing a problem with the results log file after training. Whenever I resume training by loading a saved model, the output log file of the old run (containing the per-episode reward details) is overwritten by the results of the new run. For example, I ran a training for 3000 episodes, then a new training for 5000 episodes, but now the log file in the 3000-episode folder contains the results from the 5000-episode training. I have attached a screenshot too; the results are identical. How can I solve this issue?