-
I can't find single_play.py.
How can I start training it in self-play mode?
Also, have you trained with the AlphaGo Zero method, and if so, what were the results?
Thanks.
-
Choose one topic from the following (but not limited to these) and write a "program project + report"; you may build your own project or study someone else's.
1. Optimization
2. Search
3. Game playing: [AlphaZero](https://arxiv.org/pdf/1712.01815.pdf), [Code](https://github.com/junxiaosong/AlphaZero_Gomoku), [MCTS](https://ndltd.ncl.edu…
-
Add a strategy using AlphaZero.
-
## 01.10
Done
1. Built the two pipelines and fixed the search space ✅
2. Dynamic C value, assigned according to whether all child nodes have been fully explored (yes: 3, no: 10) ✅
To Do:
1. Flexible pruning algorithm
2. Further algorithms that could be added to Fix & Flex
Train per clf separately / study NN models / train on a subset ✅ / at node selection, the highest…
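The dynamic-C rule in "Done" item 2 could be sketched as follows. This is a minimal illustration, not the project's actual code: the function names are hypothetical, and only the 3/10 switch on whether all children have been explored comes from the log.

```python
import math

def dynamic_c(children_visits):
    # Hypothetical rule from the log: once every child node has been
    # visited at least once, drop to a small exploration constant (3);
    # until then, explore aggressively with a large one (10).
    return 3 if all(v > 0 for v in children_visits) else 10

def uct_score(total_value, visits, parent_visits, c):
    # Standard UCT: mean value plus a C-weighted exploration bonus.
    if visits == 0:
        return float("inf")  # unvisited children are tried first
    return total_value / visits + c * math.sqrt(math.log(parent_visits) / visits)
```

During selection, `c = dynamic_c(...)` would be computed at the parent and passed into `uct_score` for each child.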
-
-
A preprint ([Adversarial Policies Beat Professional-Level Go AIs](https://goattack.alignmentfund.org/?row=0#no_search-board)) was recently published about a strategy for tricking KataGo into passing w…
-
I didn't dive deep into the details, but following the README I ran othello-zero v117 against Edax level 1, and othello-zero lost miserably. It didn't even capture a single corner.
```
A B C D E F G H
1…
-
Hello authors, I am very interested in your work. I am working on a DRL-related project, and I am now planning to add DQN with MCTS to it, as you did. Would you please share the code or some implem…
-
Your .ini file has several sections. I guess this is the MCTS section:
```
##################################################################
# montecarlo - Use montecarlo tree search (MCTS) …
-
Right now it seems that all proof checks happen at the end. This violates the Markovian assumption of Lean gym, since a bad tactic upstream can lead to a failure at the final tactic step. Most of the cu…
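The difference between deferred and per-step checking can be illustrated with a toy model. This is not the Lean gym API; `ProofState` and `apply_tactic` are hypothetical stand-ins used only to show why checking after every tactic keeps failure attribution Markovian.

```python
from dataclasses import dataclass

@dataclass
class ProofState:
    # Toy stand-in for a proof state: a goal count and a validity flag.
    goals: int
    is_valid: bool = True

def apply_tactic(state, tactic):
    # Hypothetical tactic application: "close" closes one goal,
    # anything else silently invalidates the state.
    if not state.is_valid:
        return state
    if tactic == "close":
        return ProofState(goals=state.goals - 1)
    return ProofState(goals=state.goals, is_valid=False)

def check_eager(tactics, state):
    """Check validity after every step, so a failure is attributed to
    the tactic that caused it rather than surfacing only at the end."""
    for i, tactic in enumerate(tactics):
        state = apply_tactic(state, tactic)
        if not state.is_valid:
            return False, i  # index of the failing step
    return state.goals == 0, None
```

With end-only checking, the bad middle tactic below would only be observed as a failure of the whole proof; eager checking pinpoints step 1 as the culprit.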