Hello,
Thank you for a great book :)
I played with the alpha-beta code and noticed that it's rather slow.
After comparing with the canonical alpha-beta algorithm, the issue turns out to be rather simple:
Instead of using `<` one can use `<=` in the comparisons (`outcome... < best...`).

However, for this to work when picking all best moves, one should change the top level either to skip the alpha-beta pruning entirely or to restore the `<` behavior by being over-cautious (e.g. subtracting 1 from the best score).

The difference is very significant. You can play with the number of positions evaluated here:
In summary, for depth 3, we go from 12424 to 1713 positions evaluated for Black's move after White plays pawn e4.
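To make the point concrete, here is a minimal self-contained sketch of the strict (`<`) versus non-strict (`<=`) cutoff on a toy game tree (inner nodes are lists, leaves are scores). This is an illustration of the idea, not the book's actual code; the `strict` flag and the tree shape are my own invention. The non-strict cutoff also prunes when the child's value merely ties the window bound, which is why the top level needs special handling if all equally-best moves are wanted.

```python
import math

def alpha_beta(node, alpha, beta, maximizing, counter, strict):
    """Toy minimax with alpha-beta pruning.

    strict=True  uses the "<"  cutoff (keeps ties, prunes less).
    strict=False uses the "<=" cutoff (prunes ties too, evaluates fewer leaves).
    counter is a one-element list tallying leaf evaluations.
    """
    if not isinstance(node, list):          # leaf: static evaluation
        counter[0] += 1
        return node
    cutoff = (lambda a, b: b < a) if strict else (lambda a, b: b <= a)
    if maximizing:
        best = -math.inf
        for child in node:
            best = max(best, alpha_beta(child, alpha, beta, False, counter, strict))
            alpha = max(alpha, best)
            if cutoff(alpha, beta):         # prune: opponent won't allow this line
                break
        return best
    best = math.inf
    for child in node:
        best = min(best, alpha_beta(child, alpha, beta, True, counter, strict))
        beta = min(beta, best)
        if cutoff(alpha, beta):
            break
    return best

# The second subtree ties the current best (3), so "<=" prunes it one leaf earlier.
tree = [[3, 5], [3, 9], [1, 2]]
for strict in (True, False):
    count = [0]
    value = alpha_beta(tree, -math.inf, math.inf, True, count, strict)
    print(value, count[0])  # same value either way; fewer leaves with "<="
```

Both variants return the same root value; the `<=` version just evaluates fewer positions, which is the speedup described above, scaled up to a real chess tree.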
This change also seems to speed up `alpha_beta_go.py`.

Note as well that I am happy to provide a chess hook in the style of the `ttt` directory for tic-tac-toe. Please let me know if you'd like that.

Thank you.