Closed: xlnwel closed this issue 3 years ago
Thanks for your questions. You're right that learning a policy in imagination always comes with bias from using the model rather than the actual environment. Reinforce is unbiased for the imagination MDP, not the actual environment. The return estimate (e.g. TD-lambda, Monte Carlo returns) is independent of the gradient estimator (e.g. Reinforce, straight-through). DreamerV2 uses TD-lambda returns and optimizes them via Reinforce. I'm sorry but I don't have time to discuss open research questions or hypotheses in detail here.
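To make the separation between the return estimate and the gradient estimator concrete, here is a minimal sketch of the TD-lambda return recursion used as the value target, V^λ_t = r_t + γ[(1−λ)·v(s_{t+1}) + λ·V^λ_{t+1}]. The function name and argument layout are my own for illustration, not DreamerV2's actual code:

```python
import numpy as np

def lambda_returns(rewards, values, bootstrap, discount=0.99, lam=0.95):
    """TD(lambda) returns over an imagined rollout (illustrative sketch).

    rewards[t] is the reward at step t, values[t] the critic's estimate
    v(s_t), and bootstrap the value estimate for the state after the
    final step. Computed backwards with the recursion
    V_t = r_t + discount * ((1 - lam) * v(s_{t+1}) + lam * V_{t+1}).
    """
    returns = np.zeros_like(rewards, dtype=float)
    next_return = bootstrap  # V_{t+1}, initialized with the bootstrap value
    next_value = bootstrap   # v(s_{t+1}), also the bootstrap at the last step
    for t in reversed(range(len(rewards))):
        returns[t] = rewards[t] + discount * (
            (1 - lam) * next_value + lam * next_return)
        next_return = returns[t]
        next_value = values[t]
    return returns
```

With lam=1 this reduces to discounted Monte Carlo returns, with lam=0 to one-step TD targets; either way the resulting targets can be optimized with Reinforce or with straight-through backpropagation.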
Regarding your second question, I'm not very confident that this is actually what's going on, but the idea is that the chain rule gives `grad_params objective = grad_sample objective * grad_suffstats sample * grad_params suffstats`. For Gaussian latents, the `grad_suffstats sample` term would be `grad_{mean,stddev} (stddev * epsilon + mean)`. For categorical variables, we can't compute that term, which is why reparameterization isn't possible. Straight-through gradients simply ignore the term. Because the term would generally be smaller or larger than 1, dropping it scales the gradient and can contribute vanishing or exploding gradients.
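A small sketch of the two cases, under my own variable names (in a real autodiff framework the straight-through trick is written as `one_hot + probs - stop_gradient(probs)`; here the forward value and the assumed backward Jacobian are just spelled out by hand):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Gaussian latent: sample = mean + stddev * epsilon, epsilon ~ N(0, 1).
# The grad_suffstats term exists in closed form, so reparameterization works.
mean, stddev = 0.5, 2.0
epsilon = rng.standard_normal()
sample = mean + stddev * epsilon
d_sample_d_mean = 1.0        # d(mean + stddev * eps) / d(mean)
d_sample_d_stddev = epsilon  # d(mean + stddev * eps) / d(stddev)

# Categorical latent: drawing a one-hot sample is not differentiable in
# probs, so no such closed-form term exists. The straight-through
# estimator uses the hard sample in the forward pass and pretends
# d(sample)/d(probs) is the identity in the backward pass, i.e. the
# missing grad_suffstats factor is replaced by 1.
probs = np.array([0.1, 0.6, 0.3])
index = rng.choice(len(probs), p=probs)
one_hot = np.eye(len(probs))[index]
st_forward = one_hot                            # hard sample used forward
st_backward_jacobian = np.eye(len(probs))       # assumed identity backward
```

Because the assumed identity Jacobian is not the true (nonexistent) one, the estimator is biased, and because the ignored factor would in general not equal 1, the gradient magnitude is off by that scale.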
From the paper:
Reinforce gradients and straight-through gradients, which backpropagate directly through the learned dynamics. Intuitively, the low-variance but biased dynamics backpropagation could learn faster initially and the unbiased but high-variance Reinforce gradients could converge to a better solution.
Is the bias (with respect to the imagined MDP) a property of straight-through gradients? So would reparameterization, such as in DreamerV1, also be unbiased (with respect to the imagined MDP)?
Best regards, Tim
Yes, that's right. But reparameterization only works for continuous latent variables and actions, not for categoricals.
Hi @danijar,
Thank you for the great work on DreamerV2 and for sharing the code. It's great news that DreamerV2 achieves SOTA performance on Atari games. I have two questions about the paper and hope you can help me clarify them.