kblomdahl / dream-go

Artificial Go player based on reinforcement and supervised learning
Apache License 2.0

Re-balance search tree size vs neural network size #39

Closed kblomdahl closed 5 years ago

kblomdahl commented 5 years ago

When playing on Go servers such as CGOS (with a time limit of 15 minutes per side) we typically run up to about 150,000 rollouts per move. This is not bad per se, but the benefits from searching tend to encounter diminishing returns past 32,000 nodes†, so we are wasting a lot of FLOPs on rollouts that have little benefit to the final result.

To fix this we can increase the neural network size, which will reduce the search tree size in favour of a more accurate evaluation of each node. But a larger neural network will also require more data to train.

† Based on how many rollouts are necessary to visit all root candidates on an empty board at least once.

Available sizes

Today we use a 9-deep, 128-wide neural network. We can alter the depth and width almost arbitrarily, but there are some constraints:

The number of parameters of a model can be calculated from:

depth · (18 · width² + 2 · width + 1) + 38 · width + 354,655

And the FLOPs necessary for inference can be calculated from:

361 · depth · (18 · width² + 2 · width + 1) + 361 · 38 · width + 354,655

Two parameters influence these values: width and depth. Which of the two we want to increase is problem-specific, and so far there is no sound theoretical framework we can use to guide us. Furthermore, evaluating each candidate requires huge amounts of time, so I am going to cherry-pick some candidates based on personal intuition and ignore the rest:

| Depth | Width | # Parameters | FLOPs |
| ----- | ----- | ------------ | ----- |
| 9 | 128 | 3,016,040 | 961,114,640 |
| 9 | 256 | 10,985,832 | 3,838,209,552 |
| 16 | 192 | 10,984,943 | 3,837,888,623 |
| 23 | 160 | 10,966,518 | 3,831,237,198 |
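As a sanity check, the parameter and FLOP counts in the table can be reproduced directly from the two formulas above; a minimal sketch:

```python
def num_parameters(depth: int, width: int) -> int:
    # depth * (18 * width^2 + 2 * width + 1) + 38 * width + 354,655
    return depth * (18 * width ** 2 + 2 * width + 1) + 38 * width + 354_655


def num_flops(depth: int, width: int) -> int:
    # Same shape as the parameter count, but the per-layer terms are
    # multiplied by the 361 board vertices.
    return 361 * depth * (18 * width ** 2 + 2 * width + 1) \
        + 361 * 38 * width + 354_655


for depth, width in [(9, 128), (9, 256), (16, 192), (23, 160)]:
    print(depth, width, num_parameters(depth, width), num_flops(depth, width))
```

Running this reproduces every row of the table, e.g. 3,016,040 parameters and 961,114,640 FLOPs for the current 9x128 network.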
kblomdahl commented 5 years ago

Trained from about 250,000 professional Foxy games. These are the final validation scores based on 10,000 professional games (from before AlphaGo, so we may have to pick a different dataset):

| Depth | Width | Policy (top-1) | Policy (top-3) | Policy (top-5) | Value |
| ----- | ----- | -------------- | -------------- | -------------- | ----- |
| 9 | 128 | 0.48574310541152954 | 0.7269249558448792 | 0.8160569071769714 | 0.6877566576004028 |
| 9 | 256 | 0.5018187761306763 | 0.745582103729248 | 0.8326722979545593 | 0.6977071762084961 |
| 16 | 192 | 0.5031564831733704 | 0.7433526515960693 | 0.8325315117835999 | 0.6984581351280212 |
| 23 | 160 | 0.5038605332374573 | 0.7408650517463684 | 0.8261481523513794 | 0.6942808032035828 |

Tournament

The results for 9x256, 16x192, and 23x160 look very similar; this could be due to a few different reasons:

Play testing of the networks may reveal more interesting details. The play test will be set up such that:

dg-16x192 v dg-9x128 (37/200 games)
unknown results: 1 2.70%
board size: 19   komi: 7.5
            wins              black         white       avg cpu
dg-16x192     28 75.68%       11 57.89%     17 94.44%    865.58
dg-9x128       8 21.62%       1   5.56%     7  36.84%    881.52
                              12 32.43%     24 64.86%

dg-16x192 v dg-23x160 (37/200 games)
unknown results: 1 2.70%
board size: 19   komi: 7.5
            wins              black         white       avg cpu
dg-16x192     23 62.16%       10 52.63%     13 72.22%    708.71
dg-23x160     13 35.14%       4  22.22%     9  47.37%    683.93
                              14 37.84%     22 59.46%

dg-16x192 v dg-9x256 (37/200 games)
unknown results: 3 8.11%
board size: 19   komi: 7.5
            wins              black         white       avg cpu
dg-16x192     18 48.65%       8  42.11%     10 55.56%    838.37
dg-9x256      16 43.24%       8  44.44%     8  42.11%    858.27
                              16 43.24%     18 48.65%

dg-9x128 v dg-23x160 (36/200 games)
unknown results: 5 13.89%
board size: 19   komi: 7.5
            wins              black         white       avg cpu
dg-9x128      10 27.78%       5  27.78%     5  27.78%    855.59
dg-23x160     21 58.33%       10 55.56%     11 61.11%    779.41
                              15 41.67%     16 44.44%

dg-9x128 v dg-9x256 (36/200 games)
unknown results: 2 5.56%
board size: 19   komi: 7.5
           wins              black         white       avg cpu
dg-9x128      7 19.44%       1   5.56%     6  33.33%    849.99
dg-9x256     27 75.00%       11 61.11%     16 88.89%    868.63
                             12 33.33%     22 61.11%

dg-23x160 v dg-9x256 (36/200 games)
board size: 19   komi: 7.5
            wins              black         white       avg cpu
dg-23x160     15 41.67%       8  44.44%     7  38.89%    820.20
dg-9x256      21 58.33%       11 61.11%     10 55.56%    787.45
                              19 52.78%     17 47.22%

Elo

dg-9x128:0.6.3                      0.00
dg-23x160:0.6.3                   138.14
dg-9x256:0.6.3                    210.86
dg-16x192:0.6.3                   229.86
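As a rough consistency check, an Elo gap Δ maps to an expected score of 1 / (1 + 10^(−Δ/400)). A sketch comparing that prediction against the decided games of the dg-16x192 v dg-9x128 match above:

```python
def expected_score(elo_diff: float) -> float:
    """Expected score for the stronger side under the logistic Elo model."""
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))


# Elo gap between dg-16x192 (229.86) and dg-9x128 (0.00) from the list above.
predicted = expected_score(229.86 - 0.00)

# Head-to-head: dg-16x192 won 28 of the 36 decided games in that match.
observed = 28 / 36

print(f"predicted {predicted:.3f}, observed {observed:.3f}")
```

The predicted score (about 0.79) is close to the observed 0.78, so the ratings are at least consistent with the head-to-head results.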

Performance

All times are in nanoseconds, according to the `bench batch_size` command. As expected, the deeper models are more expensive to compute in practice, despite having roughly the same FLOPs, since they involve more cuDNN overhead:

| Depth | Width | Batch Size (1) | Batch Size (4) | Batch Size (8) | Batch Size (16) | Batch Size (32) | Batch Size (256) |
| ----- | ----- | -------------- | -------------- | -------------- | --------------- | --------------- | ---------------- |
| 9 | 128 | 896,035 | 730,125 | 750,913 | 849,179 | 1,406,923 | 9,292,403 |
| 9 | 256 | 1,098,342 | 1,210,698 | 1,327,191 | 2,256,043 | 3,714,270 | 27,456,126 |
| 16 | 192 | 1,449,075 | 1,602,011 | 1,677,403 | 2,856,924 | 4,679,059 | 34,597,654 |
| 23 | 160 | 2,221,460 | 2,405,039 | 2,571,189 | 4,217,678 | 6,849,057 | 48,325,717 |
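Dividing the batch-256 timings by the batch size gives the amortised cost per position, which makes the depth penalty concrete; a quick sketch using the numbers copied from the table above:

```python
# Batch-256 timings in nanoseconds, copied from the benchmark table above.
batch_256_ns = {
    "9x128": 9_292_403,
    "9x256": 27_456_126,
    "16x192": 34_597_654,
    "23x160": 48_325_717,
}

for name, total_ns in batch_256_ns.items():
    per_position_ns = total_ns / 256
    print(f"{name}: {per_position_ns:,.0f} ns/position")
```

At batch 256 the 23x160 network costs roughly five times as much per position as 9x128, even though 9x256, 16x192, and 23x160 have near-identical theoretical FLOPs.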
kblomdahl commented 5 years ago

Evaluation

At the end of the day, the only metric that matters is the playing strength of the final network, and based on the evidence provided in the previous post I suggest we use the architecture with the highest Elo rating:

16x192

Discussion

Some interesting observations one can make based on the data above: