danijar / dreamerv2

Mastering Atari with Discrete World Models
https://danijar.com/dreamerv2
MIT License

Two questions about the paper #1

Closed xlnwel closed 3 years ago

xlnwel commented 3 years ago

Hi @danijar,

Thank you for the great work on DreamerV2 and for sharing the code. It's great news that DreamerV2 achieves SOTA performance on Atari games, but I have two questions about the paper that I hope you can help me clarify.

  1. On page 6, the actor loss function: you mention that Reinforce is unbiased. To the best of my knowledge, Reinforce is unbiased only when it is used with Monte Carlo returns. In the case of DreamerV2, which uses TD(𝝀) to approximate the return, bias should be introduced by the error of the value function approximator. Am I right? Furthermore, I have a conjecture about why directly maximizing TD(𝝀) performs worse than Reinforce: when maximizing TD(𝝀), the gradients first flow through the world model $p(s'|s,a)$ and then through the policy model. This could cause two problems: 1) the error of the world model may be propagated to the policy model, resulting in inaccurate gradient estimates; 2) vanishing/exploding gradients may occur, since the prediction graph becomes very deep and there is no mechanism such as skip connections to deal with it. I acknowledge that the second problem may be partially addressed by the intermediate policy losses at each imagination step, but I'd like to bring it up here for discussion. What do you think about my conjecture?
  2. On page 9, when discussing the potential advantage of categorical latents, you mention "a term that would otherwise scale the gradient." Which term do you mean? Is it the $\epsilon$ used in reparameterization?
danijar commented 3 years ago

Thanks for your questions. You're right that learning a policy in imagination always comes with bias from using the model rather than the actual environment. Reinforce is unbiased for the imagination MDP, not the actual environment. The return estimate (e.g. TD-lambda, Monte Carlo returns) is independent of the gradient estimator (e.g. Reinforce, straight-through). DreamerV2 uses TD-lambda returns and optimizes them via Reinforce. I'm sorry but I don't have time to discuss open research questions or hypotheses in detail here.
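
For concreteness, a minimal TensorFlow sketch of a Reinforce actor loss computed on TD-lambda returns might look as follows (tensor names and shapes are illustrative, not the exact code in this repo):

```python
import tensorflow as tf

def reinforce_actor_loss(policy_logits, actions, lambda_returns, values):
  # Illustrative shapes: policy_logits [T, B, A], actions [T, B] (integer),
  # lambda_returns and values [T, B], all computed on imagined trajectories.
  # The return estimate enters only through the stop-gradient advantage, so
  # it is independent of the gradient estimator (Reinforce vs straight-through).
  advantage = tf.stop_gradient(lambda_returns - values)
  # -log pi(a_t | s_t) for the actions taken in imagination.
  neg_logp = tf.nn.sparse_softmax_cross_entropy_with_logits(
      labels=actions, logits=policy_logits)
  # Reinforce: maximize E[log pi * advantage], i.e. minimize its negative.
  return tf.reduce_mean(neg_logp * advantage)
```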

Regarding your second question, I'm not very confident that this is actually what's going on, but the idea is that the chain rule goes grad_params objective = grad_sample objective * grad_suffstats sample * grad_params suffstats. For Gaussian latents, the grad_suffstats sample term would be grad_{mean,stddev} (stddev * epsilon + mean). For categorical variables, we can't compute that term which is why reparameterization isn't possible. Straight-through gradients just ignore the term. Because the term would be either smaller or larger than 1, it would scale the gradient and potentially contribute vanishing or exploding gradients.
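
As a rough sketch, straight-through sampling of a categorical latent can be written like this (illustrative, not the exact implementation in this repo):

```python
import tensorflow as tf

def straight_through_categorical_sample(logits):
  # Illustrative: logits of shape [batch, classes].
  probs = tf.nn.softmax(logits)
  # Non-differentiable one-hot sample.
  index = tf.random.categorical(logits, num_samples=1)[:, 0]
  sample = tf.one_hot(index, depth=logits.shape[-1], dtype=probs.dtype)
  # Straight-through: the forward pass returns the sample, while the backward
  # pass treats d(sample)/d(probs) as the identity, i.e. the grad_suffstats
  # sample term of the chain rule above is simply dropped.
  return sample + probs - tf.stop_gradient(probs)
```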

mctigger commented 2 years ago

From the paper:

> Reinforce gradients and straight-through gradients, which backpropagate directly through the learned dynamics. Intuitively, the low-variance but biased dynamics backpropagation could learn faster initially and the unbiased but high-variance Reinforce gradients could converge to a better solution.

Is the bias (with respect to the imagined MDP) a property of straight-through gradients? So would reparameterization, as in DreamerV1, also be unbiased (with respect to the imagined MDP)?

Best regards, Tim

danijar commented 2 years ago

Yes, that's right. But reparameterization only works for continuous latent variables and actions, not for categoricals.
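
For example, a reparameterized sample of a diagonal Gaussian (as used for DreamerV1's continuous latents) looks roughly like this; the term that scales the gradient is the derivative of this expression with respect to mean and stddev, which has no counterpart for a categorical sample:

```python
import tensorflow as tf

def gaussian_rsample(mean, stddev):
  # Reparameterization trick: the randomness is isolated in epsilon, so
  # gradients flow through mean and stddev (the grad_suffstats sample term).
  epsilon = tf.random.normal(tf.shape(mean))
  return mean + stddev * epsilon
```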