cycraig / MP-DQN

Source code for the dissertation: "Multi-Pass Deep Q-Networks for Reinforcement Learning with Parameterised Action Spaces"
MIT License

Questions about class SARSA(lambda) #3

Open sasforce opened 5 years ago

sasforce commented 5 years ago

Hello, I have a few questions.

1. What is the effect of the variable "shrink" in the class "SarsaLambdaAgent"? Can I use another basis instead, such as the polynomial basis?
2. Why do you scale the step size of the SARSA agent? Can I use a fixed one instead?

cycraig commented 5 years ago
  1. The shrink variable is a scaling factor per basis-function term, optionally provided by the basis to help prevent extremely large values and divergence. In this code it is only used for the Fourier basis; for the other basis functions it defaults to a vector of 1's (no scaling). You can pass any basis you want into the SarsaLambdaAgent constructor; an example of this is shown in run_goal_qpamdp.py, and a rough sketch of the per-term scaling idea is included below.

  2. The automatic scaling of the learning rate (alpha) for Sarsa(λ) follows Dabney and Barto [2012]. It automatically scales alpha down to avoid divergence during training, which is convenient because you no longer need to tune alpha manually, but it can lead to slow learning. You can turn this feature off by passing scale_alpha=False and setting alpha to a fixed value in the SarsaLambdaAgent constructor; this is again shown in run_goal_qpamdp.py. A sketch of the step-size bound is also included below.
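
For item 1, here is a rough, self-contained sketch of how a Fourier basis could supply a per-term shrink vector, in the spirit of the learning-rate scaling of Konidaris et al. [2011]. The class and method names (SimpleFourierBasis, compute_features, get_shrink) are assumptions for illustration, not the actual classes in this repository:

```python
import numpy as np

# Illustrative sketch only: a minimal Fourier basis exposing a per-term
# "shrink" vector. Not the repository's actual basis implementation.
class SimpleFourierBasis:
    def __init__(self, state_dim, order=3):
        # All integer coefficient vectors c with entries in [0, order].
        self.coeffs = np.array(
            list(np.ndindex(*([order + 1] * state_dim))), dtype=float
        )

    def compute_features(self, state):
        # Fourier basis features: cos(pi * c . s) for each coefficient vector c.
        return np.cos(np.pi * self.coeffs @ np.asarray(state, dtype=float))

    def get_shrink(self):
        # Per-term scaling 1 / ||c|| (1 for the constant term), so that
        # high-frequency terms receive proportionally smaller updates.
        norms = np.linalg.norm(self.coeffs, axis=1)
        norms[norms == 0.0] = 1.0
        return 1.0 / norms
```

The agent can then multiply its per-weight learning rates by this shrink vector before each update; bases that do not need it would simply return a vector of 1's.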
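
For item 2, here is a minimal sketch of the Dabney and Barto [2012] adaptive step-size bound for Sarsa(λ) with linear function approximation, assuming e is the eligibility-trace vector and phi / phi_next are the feature vectors of the current and next state-action pairs (again illustrative, not the exact code in this repository):

```python
import numpy as np

def bound_alpha(alpha, e, phi, phi_next, gamma):
    # Bound alpha <= 1 / |e . (gamma * phi_next - phi)| so that a full update
    # along the eligibility trace cannot overshoot (and flip the sign of) the
    # TD error. Alpha is only ever decreased, never increased.
    denom = abs(float(np.dot(e, gamma * phi_next - phi)))
    if denom > 0.0:
        alpha = min(alpha, 1.0 / denom)
    return alpha
```

Passing scale_alpha=False would presumably bypass such a bound and leave alpha at whatever fixed value you provide.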

sasforce commented 5 years ago

Thank you for your kind reply!