-
This seems to be a waste of computation, and thus should be fixed.
```
[markgerads@localhost autogtp]$ ./autogtp -s -u 1 -k sgf
AutoGTP v16
Using 1 thread(s) for GPU(s).
Starting tuning process, ple…
```
-
In [A0](https://arxiv.org/abs/1712.01815), 64 TPU workers are used to train the network. That means that with their batch size of 4096, each worker independently processes a subset of every minibatch …
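For concreteness, simple arithmetic from the quoted numbers (assuming the minibatch is split evenly): 4096 positions / 64 workers = 64 positions per worker per optimization step.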
-
Pretty sure we have no resign yet, but there is probably support inherited from Leela Zero? We should probably start with 0.01 and then raise it to 0.10 once we have a GM-level network or TB adjudication. It might no…
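As context, here is a minimal sketch of how a winrate-based resign threshold typically works in Leela-style self-play (the function name and signature are illustrative assumptions, not the actual Leela Zero code):

```python
def should_resign(winrate: float, resign_threshold: float = 0.01) -> bool:
    """Illustrative sketch: the side to move resigns when its estimated
    winrate (e.g. from the network's value head) drops below the threshold.
    The 0.01 and 0.10 values match the ones discussed above."""
    return winrate < resign_threshold

# With the tight early threshold, only truly hopeless positions resign:
assert should_resign(0.005, resign_threshold=0.01)
assert not should_resign(0.05, resign_threshold=0.01)
# With the looser 0.10 threshold, the same position would be resigned:
assert should_resign(0.05, resign_threshold=0.10)
```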
-
How did you tune the --mcts_puct values? Is it true that different values are used for generating self-play games for training vs. match play?
I think self-play for training uses --mcts_puct 0.85
https…
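For context, the standard PUCT selection rule from the AlphaGo Zero paper is shown below; that --mcts_puct maps onto the $c_{\mathrm{puct}}$ constant is an assumption on my part:

$$a = \operatorname*{arg\,max}_a \left( Q(s,a) + c_{\mathrm{puct}} \, P(s,a) \, \frac{\sqrt{\sum_b N(s,b)}}{1 + N(s,a)} \right)$$

A larger $c_{\mathrm{puct}}$ weights the prior/visit-count term more heavily, i.e. more exploration; self-play for training plausibly wants more exploration than match play does.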
-
I matched 174 (40B) and 157 (15B) with 1 visit each for 100 games, CPU only. The 40B weights won 86.0% of the games - almost exactly the expectation based on the Elo difference between the two.*
…
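As a sanity check on "almost exactly the expectation" (this is the standard Elo expected-score formula, not taken from the post):

$$E = \frac{1}{1 + 10^{-\Delta/400}}, \qquad E = 0.86 \;\Rightarrow\; \Delta = 400 \log_{10}\frac{0.86}{0.14} \approx 315.$$

So an 86.0% score over 100 games corresponds to an implied gap of roughly 315 Elo.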
-
It's possible that AlphaGo Master (starting from supervised learning) is much stronger than AlphaZero. I say this because of http://www.computer-go.org/pipermail/computer-go/2017-October/010357.html a…
-
https://sports.sina.cn/others/qipai/2018-07-18/detail-ihfnsvza2158707.d.html?from=wap
-
_(This is perhaps a long-term 'wish' request, but on the other hand it might also be of practical use, or at least theoretical interest.)_
The current implementation seems to be hard-coded/fixed to…
-
I looked at some recent match games between d0187996 and e0d2cc1a. I wanted to look at close matches, i.e. the ones that ended with a score instead of a resignation. The game 55e570b388059f1f332c0dc2f87de23…
-
From the beginning of its implementation, Leela Chess has had two parameters, policy_loss_weight and value_loss_weight, in training.py for the purpose of adjusting the total training loss term relativ…
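A minimal sketch of what such a weighted total loss looks like (assumed structure; the real training.py may differ in details such as the regularization term):

```python
import numpy as np

def combined_loss(policy_logits, policy_target, value_pred, value_target,
                  policy_loss_weight=1.0, value_loss_weight=1.0):
    # Softmax cross-entropy for the policy head (numerically stabilized).
    shifted = policy_logits - policy_logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    policy_loss = -np.sum(policy_target * log_probs)
    # Mean squared error for the value head.
    value_loss = (value_pred - value_target) ** 2
    # The two weights scale each head's contribution to the total loss.
    return policy_loss_weight * policy_loss + value_loss_weight * value_loss

# Example: equal weights vs. down-weighting the value head.
logits = np.array([2.0, 0.5, -1.0])
target = np.array([0.7, 0.2, 0.1])
print(combined_loss(logits, target, value_pred=0.3, value_target=1.0))
print(combined_loss(logits, target, value_pred=0.3, value_target=1.0,
                    value_loss_weight=0.25))
```

Note that only the ratio of the two weights changes the gradient direction; the overall scale folds into the learning rate, so exposing both is presumably just a convenience.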