VictorXjoeY opened 3 years ago
It's the iterative deepening that guarantees the AI always takes the shortest path to victory, because the move is returned as soon as the search hits a depth at which it finds a win. This means we will run into issues if the AI has "memory", i.e. if the DP is not empty when `Minimax::get_move` is called.
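For context, the driver looks roughly like this (a minimal sketch with placeholder names, not the actual repository code): the first depth that proves a win is, by construction, the distance to the fastest win, and a pre-filled DP can let a shallow iteration report a win that is actually deeper.

```cpp
#include <utility>

// Placeholder types and helpers standing in for the actual ones.
struct State {};
struct Move {};
constexpr int WIN = 1000000;
std::pair<Move, int> search(const State &, int depth); // fixed-depth minimax (assumed)

Move get_move_sketch(const State &root, int max_depth) {
    Move best{};
    for (int depth = 1; depth <= max_depth; depth++) {
        auto [move, score] = search(root, depth);
        best = move;
        // Returning at the first depth that proves a win is what makes
        // the chosen line the shortest one. With a non-empty DP, `search`
        // may "see" deeper than `depth`, breaking that guarantee.
        if (score == WIN) {
            return best;
        }
    }
    return best;
}
```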
Before adding the DP, I should probably add the move-sorting heuristic back first: save the moves inside Minimax for each state and sort them according to their score (according to `better_max` and `better_min`, actually). I removed it because there were some issues with the whole pruning logic, but it's now time to add it back!
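A minimal sketch of the move-ordering part, assuming a map from hashed states to the best move found on an earlier iteration (all names here are hypothetical):

```cpp
#include <algorithm>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Move {
    int id;
    bool operator==(const Move &other) const { return id == other.id; }
};

// Hypothetical: best move found for each state on a previous (shallower)
// iteration, updated whenever the search improves on it.
std::unordered_map<uint64_t, Move> previous_best;

// Try the remembered best move first so alpha-beta pruning cuts more of
// the tree; the remaining moves could additionally be sorted by their
// previous-iteration scores, per better_max/better_min.
void order_moves(uint64_t state_hash, std::vector<Move> &moves) {
    auto it = previous_best.find(state_hash);
    if (it == previous_best.end()) return;
    auto pos = std::find(moves.begin(), moves.end(), it->second);
    if (pos != moves.end()) {
        std::rotate(moves.begin(), pos, pos + 1); // move it to the front
    }
}
```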
The DP should look like this:
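Roughly, a minimal sketch, assuming states hash to a 64-bit key and each entry stores the evaluated score, the depth it was computed with, and the best move (all field names are placeholders):

```cpp
#include <cstdint>
#include <unordered_map>

struct Move { int id; };

// Placeholder entry layout, not the actual code.
struct DPEntry {
    int score;  // value the search returned for this state
    int depth;  // remaining depth the entry was computed with
    Move move;  // best move found from this state
};

// Keyed by a hash of the game state (e.g. Zobrist hashing).
std::unordered_map<uint64_t, DPEntry> dp;
```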
And let's make sure we treat cycles properly. Likely with something like this:
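One standard way (sketched here with assumed names) is to keep the set of states on the current recursion path and score any revisited state as a draw:

```cpp
#include <cstdint>
#include <unordered_set>

constexpr int DRAW = 0;

// States currently on the recursion stack; hypothetical helper.
std::unordered_set<uint64_t> on_path;

int solve(uint64_t state_hash /*, ... */) {
    // Revisiting a state that is already on the current path means the
    // game can loop forever from here, so score it as a draw instead of
    // recursing infinitely.
    if (on_path.count(state_hash)) {
        return DRAW;
    }
    on_path.insert(state_hash);
    int score = 0; // ... recurse over child states and evaluate ...
    on_path.erase(state_hash);
    return score;
}
```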
We might need to take the pruning values `alpha` and `beta` into account when storing pre-calculated states in the DP, so the code below is likely incorrect, but it's a starting point.