lightvector / KataGo

GTP engine and self-play learning in Go
https://katagotraining.org/

Monte-Carlo tree search as regularized policy optimization #275

Open Hersmunch opened 4 years ago

Hersmunch commented 4 years ago

I saw this post on reddit and thought this might be of interest here. Paper. Appendix.

Abstract: The combination of Monte-Carlo tree search (MCTS) with deep reinforcement learning has led to significant advances in artificial intelligence. However, AlphaZero, the current state-of-the-art MCTS algorithm, still relies on hand-crafted heuristics that are only partially understood. In this paper, we show that AlphaZero’s search heuristics, along with other common ones such as UCT, are an approximation to the solution of a specific regularized policy optimization problem. With this insight, we propose a variant of AlphaZero which uses the exact solution to this policy optimization problem, and show experimentally that it reliably outperforms the original algorithm in multiple domains.
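If I'm reading the paper right, the "regularized policy optimization problem" in the abstract is maximizing q·y − λ_N·KL(π_θ, y) over the simplex, whose maximizer has the closed form π̄(a) ∝ λ_N·π_θ(a) / (α − q(a)), with the scalar α picked so π̄ sums to 1 (they find it by a 1-D search), and λ_N shrinking like c·√N/(|A| + N) as visits accumulate. Here's a minimal numeric sketch of that solve; the c, q, and prior values are made up for illustration and are not KataGo's parameters:

```python
import numpy as np

def exact_search_policy(q, prior, lam, tol=1e-8):
    """Maximize <q, y> - lam * KL(prior || y) over the probability simplex.

    The maximizer has the form pi_bar(a) = lam * prior(a) / (alpha - q(a)),
    with alpha chosen so that pi_bar sums to 1. The total mass is
    monotonically decreasing in alpha, so alpha is found by bisection.
    """
    q = np.asarray(q, dtype=np.float64)
    prior = np.asarray(prior, dtype=np.float64)

    def mass(alpha):
        return np.sum(lam * prior / (alpha - q))

    # Bracket for alpha: mass(lo) >= 1 and mass(hi) <= 1.
    lo = np.max(q + lam * prior)
    hi = np.max(q) + lam
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mass(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    pi_bar = lam * prior / (0.5 * (lo + hi) - q)
    return pi_bar / pi_bar.sum()  # clean up residual bisection error


# Illustrative numbers only: lam follows the paper's schedule
# lam_N = c * sqrt(N) / (|A| + N), with a made-up c.
q = np.array([0.10, 0.30, 0.25])       # Q-values of the child nodes
prior = np.array([0.50, 0.30, 0.20])   # network policy pi_theta
N, c = 100, 1.25
lam = c * np.sqrt(N) / (len(q) + N)
print(exact_search_policy(q, prior, lam))
```

If I understand the slides correctly, they then use this π̄ in place of the usual heuristics: sample from it when acting, pick which child to simulate from it, and train the policy network toward it instead of toward the visit-count distribution.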

Also found these ICML 2020 presentation slides, which highlight the main points of the paper. Looks like they're calling it PoZero.

lightvector commented 4 years ago

Thanks! Certainly worth experimenting with a little.

dbsxdbsx commented 3 years ago

I am paying attention to this paper too, but I'm not quite familiar with the formulas in the paper.