adibyte95 opened 6 years ago
Keep in mind that epsilon
begins at 1 and decays to 99.5% of its value every 32 time steps, as given by batch_size
. The policy given by the act method will be random most of the time, and it can take several episodes before epsilon gets below 50%.
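To put numbers on that, here is a minimal sketch (assuming, as described above, that epsilon is multiplied by a decay factor of 0.995 on each replay step) of how long exploration dominates:

```python
import math

EPSILON_START = 1.0
EPSILON_DECAY = 0.995  # epsilon is multiplied by this on each replay step
BATCH_SIZE = 32        # replay is gated on having batch_size samples in memory

# Number of decay steps until the agent is greedy more than half the time,
# i.e. until 0.995 ** n drops below 0.5.
steps_to_half = math.ceil(math.log(0.5) / math.log(EPSILON_DECAY))
print(steps_to_half)  # -> 139
```

So it takes roughly 139 decay steps before epsilon falls below 0.5, which at ~32 environment steps per decay is several thousand time steps of mostly random actions.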
With a batch size of 128 and the same epsilon decay rate, this repo shows impressive results when the model is reloaded. I ran the model several times and it showed good results every time. The model was trained for fewer than 1000 episodes.
It looks like the difference is that keon's code saves and loads the weights, whereas your altered code saves and loads the model. I don't have much experience with Keras, but I would assume the value of epsilon is reset to 1 when loading the weights and initializing the model.
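A minimal sketch of why that would happen, using a hypothetical stripped-down agent class (not the repo's actual code): epsilon lives on the agent object, not in the saved weights, so any reload path that reconstructs the agent resets it to 1.

```python
class DQNAgent:
    """Hypothetical minimal agent, keeping only the exploration state."""

    def __init__(self):
        self.epsilon = 1.0          # re-initialized every time the agent is built
        self.epsilon_decay = 0.995

    def replay(self):
        # ... a gradient step on a sampled batch would go here ...
        self.epsilon *= self.epsilon_decay


agent = DQNAgent()
for _ in range(200):
    agent.replay()                  # epsilon has decayed well below 1.0

# Loading weights into a freshly constructed agent does NOT restore epsilon:
reloaded = DQNAgent()               # reloaded.epsilon is back to 1.0
print(agent.epsilon, reloaded.epsilon)
```

Neither `model.save_weights()` nor `model.save()` stores epsilon in Keras, so either way a reloaded agent would act randomly again unless epsilon is persisted separately.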
Yes, I was having a hard time with saving the weights, so I switched to saving the model instead. With this the results were much better, and I recommend the same. I cannot really comment on your assumption, though.
Adibyte95, saving the model is similar to saving the weights (because the number of nodes is constant). Is there anything more? What makes your code more accurate than Keon's code?
I am not sure... maybe the initial weights, and not the partially trained weights, are being loaded.
@keon Is it similar to DDQN? (By DDQN, do you mean double or dueling DQN?) Saving the model is similar to saving the weights (because the number of nodes is constant). Is there anything more? What makes your code more accurate than Keon's code?
DDQN is double here, since there is no dueling implementation in the repo. The hyperparameters affect the performance of a model greatly. When comparing implementations, please make sure to fix the random seeds and hyperparameters.
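For anyone comparing the two implementations, a sketch of fixing the seeds (the seed value 42 is arbitrary; the point is that both runs use the same one):

```python
import os
import random

import numpy as np

SEED = 42  # arbitrary; use the same value in both implementations

os.environ["PYTHONHASHSEED"] = str(SEED)
random.seed(SEED)
np.random.seed(SEED)

# For a full comparison you would also seed the environment and the backend,
# e.g. env.reset(seed=SEED) in Gym/Gymnasium and tf.random.set_seed(SEED)
# for Keras on TensorFlow.
```

Even with seeds fixed, GPU nondeterminism can still cause run-to-run variation, so averaging over several runs is safer than comparing single episodes.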
On reloading, the model performs very poorly compared to how it performed during training.