-
Looking at http://lczero.org/, it appears the project is dead:
![image](https://user-images.githubusercontent.com/29419994/44040419-84c3cf66-9f1b-11e8-8338-021ee0f140b5.png)
A new message is needed to explain…
-
I mentioned this with regard to AlphaZero at talkchess as well. Time management is not some separate thing. It is, and has been, part of the rules of chess for more than 100 years. The point of a …
-
Could you please at least comment the code so I can reverse-engineer some of the intentions?
-
Could AI training be added?
We can train an AI ourselves.
We need up-to-date AI game records.
https://katagotraining.org/ is a very successful example of AI training you can look at for reference. Although it is for Go, it already has 20 million game records and is still growing.
Many people still practice xiangqi with the old classical game collections, but those are outdated. Xiangqi has an advantage over Go here: it puts people straight into deep calculation.
"Fairy Stockfish" (仙女鳕鱼) xiangqi AI engine
https://gith…
-
There's a difference between reinforcement and supervised learning in the AGZ paper. The paper mentions that, although for the reinforcement version the loss function is like "loss = action_loss + va…
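For reference, here is a minimal sketch of the combined policy/value loss described in the AGZ paper, written as illustrative PyTorch. The tensor names and the `value_weight` knob are my own; the paper weights the two terms equally for reinforcement learning and, as I recall, down-weights the value term heavily for its supervised run to avoid overfitting.

```python
import torch
import torch.nn.functional as F

def agz_loss(policy_logits, value, target_pi, target_z, params,
             c=1e-4, value_weight=1.0):
    # Cross-entropy between the MCTS visit distribution (target_pi)
    # and the network's move probabilities.
    policy_loss = -(target_pi * F.log_softmax(policy_logits, dim=1)).sum(dim=1).mean()
    # Squared error between the predicted value and the game outcome z.
    value_loss = F.mse_loss(value.squeeze(-1), target_z)
    # L2 regularisation over the network parameters.
    l2 = c * sum((p ** 2).sum() for p in params)
    # value_weight = 1.0 for the RL setting; the supervised run in the
    # paper uses a much smaller weight on the value term.
    return policy_loss + value_weight * value_loss + l2
```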
-
https://github.com/gcp/leela-zero/pull/1252
For chess this applies to `go nodes N` or `go movetime T`. Some users are noticing that `go nodes N` stops before it reaches N nodes, and it's a bit conf…
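If I understand the linked discussion correctly, the early stop comes from the engine pruning search once extra visits can no longer change the chosen move ("smart pruning"). A toy illustration of that idea, not the engine's actual code:

```python
def can_stop_early(visit_counts, nodes_remaining):
    """Return True if spending every remaining node on the runner-up move
    still could not overtake the current most-visited move, so further
    search cannot change the result. Illustrative only."""
    counts = sorted(visit_counts, reverse=True)
    if len(counts) < 2:
        return True
    best, runner_up = counts[0], counts[1]
    return runner_up + nodes_remaining < best
```

With a rule like that, `go nodes 10000` can legitimately return well before 10000 nodes, which is exactly the behaviour users find confusing.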
-
[This goes beyond Leela Zero. I am not sure where to post it, so please feel free to move it somewhere more suitable for these discussions.]
**Background**: One of the things that helps to…
-
[GoNN](https://github.com/lightvector/GoNN) is a sandbox for agents similar to the Leela projects. Notable ideas tested in GoNN, at the time of creating this issue, are the following:
| "Cosme…
-
I'm going to post a number of observations here that led me to a hypothesis about network training that affects Leela Zero, Leela Chess and Minigo.
For Leela Zero, during regular rei…
-
Hi, @benediamond. I like this approximation very much. I'll help as much as I can.
I don't know this project's source code very well yet, but I think we should implement a supervised l…
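To make the suggestion concrete, here is a rough sketch of what one supervised training step could look like, assuming a network with a policy head and a value head and batches of (position, expert move, game result) tensors; every name here is illustrative rather than this project's actual API.

```python
import torch.nn.functional as F

def supervised_step(net, optimizer, positions, expert_moves, results,
                    value_weight=0.01):
    # The policy head is trained against the move actually played in the
    # game record, the value head against the final result; the small
    # value_weight mirrors the down-weighting the AGZ paper uses in its
    # supervised experiment.
    policy_logits, value = net(positions)
    policy_loss = F.cross_entropy(policy_logits, expert_moves)
    value_loss = F.mse_loss(value.squeeze(-1), results)
    loss = policy_loss + value_weight * value_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```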