lightvector / KataGo

GTP engine and self-play learning in Go
https://katagotraining.org/

Philosophical considerations motivated by the advances of KataGo and other engines #351

Open · egri-nagy opened this issue 3 years ago

egri-nagy commented 3 years ago

Our new paper "The Game Is Not over Yet—Go in the Post-AlphaGo Era" is available online. You may find the discussion interesting, and we would be happy to hear your comments.

https://www.mdpi.com/888894

lightvector commented 3 years ago

Thanks for the link; I've skimmed through the paper. I wonder if many of your questions might be fruitfully approached by first starting with smaller boards. For example, we have partial certainty now that optimal play has been reached on 7x7 (no proof, but AlphaZero-style training seems to consistently reach the same conclusion on multiple runs and give very confident winrates). We also appear to be close to optimal on 9x9, with top bots having increasingly high draw rates, especially with very long thinking times.
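
To give a concrete sense of how one might measure that kind of convergence empirically, here is a minimal sketch (not an official KataGo tool) of driving a GTP engine in self-play on a small board and counting drawn games. The engine command line is just a placeholder, and it assumes the standard GTP commands `boardsize`, `komi`, `clear_board`, `genmove`, and `final_score`, with resignation disabled in the engine config so games end with two passes.

```python
# Minimal sketch: measure the self-play draw rate on a small board over GTP.
# The engine command line is a placeholder; adjust paths for your setup.
import subprocess

ENGINE_CMD = ["katago", "gtp", "-config", "gtp.cfg", "-model", "model.bin.gz"]  # placeholder

def gtp(proc, command):
    """Send one GTP command; return the response with the leading '=' stripped."""
    proc.stdin.write(command + "\n")
    proc.stdin.flush()
    lines = []
    while True:
        line = proc.stdout.readline()
        if line.strip() == "":       # GTP responses are terminated by a blank line
            break
        lines.append(line.rstrip())
    resp = lines[0] if lines else ""
    return resp[1:].strip() if resp[:1] in ("=", "?") else resp

def play_one_game(proc, board_size=9, komi=7):
    """Self-play one game; return the final_score string, e.g. 'B+2' or '0'."""
    gtp(proc, f"boardsize {board_size}")
    gtp(proc, f"komi {komi}")        # integer komi so that draws are possible
    gtp(proc, "clear_board")
    passes, color = 0, "b"
    while passes < 2:
        move = gtp(proc, f"genmove {color}")
        if move.lower() == "resign": # assumes resignation is disabled; bail out if not
            return "resign"
        passes = passes + 1 if move.lower() == "pass" else 0
        color = "w" if color == "b" else "b"
    return gtp(proc, "final_score")  # '0' means a drawn game (jigo)

if __name__ == "__main__":
    proc = subprocess.Popen(ENGINE_CMD, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, text=True)
    results = [play_one_game(proc) for _ in range(20)]
    draws = sum(1 for r in results if r == "0")
    print(f"draw rate: {draws}/{len(results)}  ({results})")
    gtp(proc, "quit")
```
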

I would imagine that if you were to measure the various things you look at - complexity measures of the game, the amount of time needed to converge to a given level of confidence about the optimal score, the compressibility of knowledge on small boards, and so on - as a function of board size, that would give some insight into how those answers scale as you go up towards 19x19. :)
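
To make the extrapolation idea concrete, here is a rough sketch of fitting a scaling law to such measurements. The numbers are placeholders (not real data), and exponential growth in board area is just one candidate functional form among many; the point is only the shape of the experiment: measure on small boards, fit, and see what the fit predicts further up.

```python
# Rough sketch of extrapolating a small-board measurement to larger boards.
# "measured" stands for some hypothetical quantity (e.g. training effort needed
# before the self-play result stabilizes); the values are placeholders.
import numpy as np

board_sizes = np.array([5, 6, 7, 8, 9])
measured = np.array([2e3, 8e3, 3e4, 1.2e5, 5e5])  # placeholder values, not real data

# Fit log(cost) = log(a) + b * area, i.e. cost ~ a * exp(b * n^2),
# one simple candidate scaling law among many.
areas = board_sizes ** 2
b, log_a = np.polyfit(areas, np.log(measured), 1)

for n in (13, 19):
    predicted = np.exp(log_a + b * n * n)
    print(f"{n}x{n}: predicted ~{predicted:.3g}")
```
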