alreadydone / lz

Go engine with no human-provided knowledge, modeled after the AlphaGo Zero paper.
GNU General Public License v3.0

Can the newest code implement stm komi? #109

Open pangafu opened 5 years ago

pangafu commented 5 years ago

I noticed that in patch-39 the signature of `std::vector Network::gather_features` was changed.

So is there a way to implement the stm komi code on top of it? Thanks!

pangafu commented 5 years ago

I think separate GPU workers and batch sizes may work better for stm komi. Can you implement stm komi in the newest code?

alreadydone commented 5 years ago

In order to change the stm (color) planes in patch-39, you need to modify the fourth and fifth parameters of forward0 (btm and wtm): https://github.com/alreadydone/lz/blob/70a8aff1aedc9e2af5556abd928cc480fe358078/src/OpenCLScheduler.cpp#L338 https://github.com/alreadydone/lz/blob/70a8aff1aedc9e2af5556abd928cc480fe358078/src/Network.cpp#L815-L816 https://github.com/alreadydone/lz/blob/70a8aff1aedc9e2af5556abd928cc480fe358078/src/UCTSearch.cpp#L416 https://github.com/alreadydone/lz/blob/70a8aff1aedc9e2af5556abd928cc480fe358078/src/UCTSearch.cpp#L957

When I get a chance I'll try to implement dynamic komi over patch-39, and you are certainly welcome to implement it in the meantime.

Regarding workers and batch sizes: the official branch uses search threads that can send positions to any of the GPUs, while my branch (patch-39 etc.) assigns dedicated worker threads to each GPU and allows the number of worker threads and the batch size to be configured separately for each GPU. My approach reduces contention between threads and achieves a higher n/s with many GPUs, but I don't see why it would be better for stm komi.

pangafu commented 5 years ago

The official branch searches too wide when the batch size is large, and stm komi is not well trained; many low-pn (low-playout) positions in the search will produce bad values, so limiting the worker number may make the search more reasonable.

Looking forward to your stm komi code. Thanks a lot!

pangafu commented 5 years ago

And in my tests with patch-39 and the official weights, raising the worker number above 2 (e.g. to 3) increases GPU usage and pos/s, but it can't beat worker number = 2.

So I think the current weights have many faulty values in low-pn positions, because low pn means the weights are not well trained along those lines; searching too wide may just mean more faults.

pangafu commented 5 years ago

Also, in my stm komi tests with the official branch's stm komi code, using 4 or 8 GPUs with batch size > 8 gives lower handicap capability than 1 GPU with batch size 2 or 3 run for a long time.

So maybe stm komi is not suited to searching that wide.