JohnAD closed this PR 7 years ago
@Zulko
Hooray! Finished debugging the code! Still don't merge, though; I need to update the documentation first. I will add more comments as well.
BTW, do you want the version to be '0.0.6', or do you want to take this to a stable release number, such as '1.0.0'?
BTW, I did modify the original 'Negamax' slightly. See
https://github.com/Zulko/easyAI/pull/21/files#diff-74bb0325c1491d187c46b5fd3b7e1b0f
When debugging on paper, I was able to determine that sometimes, in fairly rare border cases, the function's returned bestValue would not correspond to the best_move that has that value. It seems to mostly affect leaf nodes, so it very rarely produces a wrong answer.
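For context, the classic form of this bug is updating the best value and the best move in different places, so a pruned branch or an equal score can leave them pointing at different moves. Below is a minimal sketch (not the easyAI code; the `Node` class and all names are hypothetical stand-ins) of the usual safeguard: return the value and the move as a pair and update them in the same branch.

```python
from math import inf

class Node:
    """Toy game-tree node (hypothetical stand-in for a real game state)."""
    def __init__(self, score=0, children=None):
        self._score = score
        self._children = children or {}
    def is_over(self):
        return not self._children
    def score(self):
        return self._score
    def moves(self):
        return list(self._children)
    def apply(self, move):
        return self._children[move]

def negamax(state, depth, alpha=-inf, beta=inf):
    """Return (value, move) together so the two can never drift apart."""
    if depth == 0 or state.is_over():
        return state.score(), None
    best_value, best_move = -inf, None
    for move in state.moves():
        child_value, _ = negamax(state.apply(move), depth - 1, -beta, -alpha)
        value = -child_value
        if value > best_value:
            # Update value and move in the SAME branch; updating them in
            # separate places is exactly how they fall out of sync.
            best_value, best_move = value, move
        alpha = max(alpha, value)
        if alpha >= beta:  # alpha/beta cutoff
            break
    return best_value, best_move
```

With a two-move toy tree, `negamax(Node(children={"a": Node(score=3), "b": Node(score=-5)}), 1)` returns the value 5 paired with move `"b"`, since the child score is negated for the player at the root.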
There is also now an ai_unittest.py script in the root directory. Running it will run tests against all of the AIs to verify integrity.
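This is not the actual script, but a minimal sketch of the kind of cross-check such a suite can perform: run two implementations that should behave identically over the same positions and assert they agree. The two `greedy_ai` functions here are deliberately trivial, hypothetical stand-ins for the real solvers.

```python
import unittest

def greedy_ai(scores):
    """Pick the index of the highest immediate score (toy 'AI')."""
    return max(range(len(scores)), key=lambda i: scores[i])

def greedy_ai_nonrecursive(scores):
    """Same policy written as an explicit loop (toy 'NR AI')."""
    best, best_i = float("-inf"), None
    for i, s in enumerate(scores):
        if s > best:  # strict '>' keeps ties resolved to the first max
            best, best_i = s, i
    return best_i

class AIIntegrity(unittest.TestCase):
    POSITIONS = [[1, 5, 3], [-2, -7, 0], [4, 4, 1]]

    def test_variants_agree(self):
        # Both implementations must choose the same move in every position,
        # including the tied position [4, 4, 1].
        for pos in self.POSITIONS:
            self.assertEqual(greedy_ai(pos), greedy_ai_nonrecursive(pos))
```

Running `python -m unittest` against a file like this executes the check; the real script presumably plays whole games rather than single positions.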
Finished! Testing works and I've updated the documentation.
Two items to point out:
1. I updated the version to 1.0.0.1 from 0.0.0.5. If you don't want me to do that, I can make it 0.0.0.6. IMO, this is a stable library.
2. I'm not 100% sure of my description of DUAL in the documentation changes. See https://github.com/PurpleSquirrelGames/easyAI/blob/master/docs/source/ai_descriptions.rst
Let me know if you would like to see additional changes (or want me to undo some of the changes I made). Otherwise, it is all ready to merge.
@Zulko
I'll have quite a bit of time this weekend to work on this. So, if you have a chance, let me know if there are additional changes you would like me to make/unmake. :)
John
Since the PR had not yet been accepted, I went ahead and added another commit that handles the unusual case of asking NonRecursiveNegamax for a depth of 0. Strictly speaking, it really isn't negamax anymore at depth zero. So it is not a critical fix at all, but it is good to be thorough, IMO.
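To illustrate the point about depth 0: with no lookahead there is nothing to search, so the call degenerates to a static evaluation with no best move. A minimal sketch of that guard (not the committed code; the `State` class and names are hypothetical, and the normal-path body here is a one-ply stand-in for the real explicit-stack loop):

```python
from math import inf

class State:
    """Toy position (hypothetical stand-in for an easyAI game state)."""
    def __init__(self, score=0, children=None):
        self._score = score
        self._children = children or {}
    def score(self):
        return self._score
    def moves(self):
        return list(self._children)
    def apply(self, move):
        return self._children[move]

def nonrecursive_negamax(state, depth):
    """Sketch of the depth-0 guard; the real search loop is elided."""
    if depth == 0:
        # No moves are explored at depth 0, so this is not really negamax:
        # it is just a static evaluation, and "best move" is undefined.
        return state.score(), None
    # One-ply stand-in for the normal path (a real non-recursive version
    # walks deeper with an explicit stack instead of recursion).
    best_value, best_move = -inf, None
    for move in state.moves():
        value = -state.apply(move).score()
        if value > best_value:
            best_value, best_move = value, move
    return best_value, best_move
```

Returning `(score, None)` is just one reasonable convention; raising an error for depth 0 would be another.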
This is not ready for merging. It is simply a progress checkpoint for this project.
There is, in fact, some kind of fundamental flaw in my NR version.
The test suites run a game of Knights with each of the different AIs (see the file ai_unittests.py). The regular and NR versions should choose the same moves, and they do until deep into the game: move 18. So there is some border case that is being handled differently, and I can't for the life of me see why. I suspect my version is being more aggressive in the alpha/beta pruning, but that is a wild guess at this point.

Next step: run an end game by hand on paper, get the exact answer, and compare it against both versions. It might take me a while...
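Before going to paper, one way to narrow the search is a small harness that steps both solvers through the same deterministic game and reports the first move where their choices differ. A hedged sketch (not part of the PR; the `Pile` game and the two lambda "AIs" are toy stand-ins used only to exercise the harness):

```python
class Pile:
    """Trivial take-1-or-2 game used only to demonstrate the harness."""
    def __init__(self, n):
        self.n = n
    def is_over(self):
        return self.n == 0
    def moves(self):
        return [m for m in (1, 2) if m <= self.n]
    def apply(self, m):
        return Pile(self.n - m)

def first_divergence(state, solver_a, solver_b, max_moves=200):
    """Step both solvers through one game; return (move_no, a, b) at the
    first move where their choices differ, or None if they always agree."""
    for move_no in range(1, max_moves + 1):
        if state.is_over():
            return None
        a, b = solver_a(state), solver_b(state)
        if a != b:
            return move_no, a, b
        state = state.apply(a)  # advance using solver_a's choice
    return None

# Two deliberately slightly-different toy 'AIs' for demonstration.
take_one = lambda s: s.moves()[0]                  # always take 1
take_one_off = lambda s: 2 if s.n == 5 else s.moves()[0]
```

For example, starting from `Pile(8)`, the two toy solvers agree for three moves and then split on move 4 when the pile reaches 5. Pointing the same harness at the regular and NR negamax would pin down the exact position behind the move-18 split, which can then be worked by hand.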