zhihanyang2022 opened this issue 3 years ago
Hi, hope you've already figured this out.
From my understanding, $\text{KL}(\pi_\text{new} \,\|\, \pi_\text{old}) = \frac{1}{2} (\pi_\text{new} - \pi_\text{old})^T A (\pi_\text{new} - \pi_\text{old}) + o(\|\pi_\text{new} - \pi_\text{old}\|^2)$. The initial step size $\beta$ is chosen such that $\frac{1}{2} (\beta s)^T A (\beta s) \leq \text{KL}_\text{max}$, where $\beta s = \pi_\text{new} - \pi_\text{old}$.
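Concretely, making the quadratic constraint tight at the full step gives the standard closed-form step size used as the starting point of the line search:

$$\beta = \sqrt{\frac{2\,\text{KL}_\text{max}}{s^T A s}}$$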
Since we only decrease the step size $\beta$ during the line search, ideally the KL should also only decrease (if we ignore the higher-order term). Empirically (using the PyTorch implementation from https://github.com/ikostrikov/pytorch-trpo), I find the KL is always roughly at $\text{KL}_\text{max}$, i.e., with the hyperparameters suggested by the authors, the line search never has to backtrack.
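Here is a minimal NumPy sketch (not from either repo; `A`, `s`, and `kl_max` are made-up stand-ins for illustration) showing that with $\beta$ chosen this way, the quadratic KL estimate sits exactly at $\text{KL}_\text{max}$ at the full step, which is consistent with the observation above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the Fisher matrix A and the search direction s.
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)      # symmetric positive definite
s = rng.standard_normal(n)       # e.g. the natural gradient direction
kl_max = 0.01                    # trust-region radius

# Full step size that makes the quadratic KL constraint tight.
beta = np.sqrt(2.0 * kl_max / (s @ A @ s))

# Quadratic KL estimate at the full step: equals kl_max by construction.
kl_quad = 0.5 * (beta * s) @ A @ (beta * s)
print(kl_quad, kl_max)  # identical up to floating-point error

# Each backtracking step (e.g. beta *= 0.5) scales the estimate by 1/4,
# so the quadratic term can only shrink -- any KL overshoot must come
# from the higher-order remainder.
```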
The TRPO paper (Appendix C) states that "we use a line search to ensure improvement of the surrogate objective and satisfaction of the KL divergence constraint". However, in the current codebase, the `linesearch` function only checks whether the candidate update improves the surrogate advantage; it never checks the KL constraint, which deviates from the paper:
https://github.com/joschu/modular_rl/blob/master/modular_rl/trpo.py#L92
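For comparison, here is a hedged sketch of what Appendix C seems to describe. The names are hypothetical, not the ones in `modular_rl`; `surrogate_and_kl` stands in for whatever function computes both quantities for a given parameter vector:

```python
def linesearch_with_kl(surrogate_and_kl, theta_old, fullstep, kl_max,
                       backtrack_ratio=0.5, max_backtracks=10):
    """Backtracking line search that accepts a step only if it both
    improves the surrogate objective and satisfies the KL constraint,
    as described in Appendix C of the TRPO paper. (Sketch, not the
    actual modular_rl code.)"""
    surr_old, _ = surrogate_and_kl(theta_old)
    for i in range(max_backtracks):
        step = (backtrack_ratio ** i) * fullstep
        theta_new = theta_old + step
        surr_new, kl_new = surrogate_and_kl(theta_new)
        # Accept only if the surrogate improves AND the KL stays
        # inside the trust region.
        if surr_new > surr_old and kl_new <= kl_max:
            return theta_new
    return theta_old  # no acceptable step found; keep the old parameters
```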
Could you help me understand the reasoning behind this? @joschu
Thanks!