wspeirs opened this issue 3 years ago
Thanks, that's a useful feature.
My concern is that it has some overhead even when unused. It would be great to have a (type-safe) way to opt in. Not sure how to design it. Maybe something like `ZobristHashed<Chess>` is possible?
I think the overhead is fairly minimal... a few table lookups and some XORs. That said, I'm not sure I'm getting the information (square and piece mostly) in the most optimal way in all instances, so I'd love some feedback on that.
As for creating a wrapper type, it would probably be a trait or something that has its own `do_move` implementation. Scratching my head a little on how to do that. We could also add an `Option<&mut u64>` to `do_move`... if the `Option` is `None`, then no computation is done; however, if it's `Some`, then it is... what do you think about that? Thanks!
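For illustration, a minimal sketch of that "optional hash" idea, assuming a toy move type and a flat piece-by-square table (the names `do_move`, `Move`, and the table layout are placeholders, not the actual shakmaty API):

```rust
const PIECES: usize = 12;
const SQUARES: usize = 64;

/// Toy move: which piece moved, from where, to where.
struct Move {
    piece: usize,
    from: usize,
    to: usize,
}

/// Look up the pseudo-random value for a (piece, square) pair.
fn zobrist(table: &[u64; PIECES * SQUARES], piece: usize, sq: usize) -> u64 {
    table[piece * SQUARES + sq]
}

/// If `hash` is `Some`, the caller pays for the incremental update;
/// if it is `None`, no extra work is done.
fn do_move(table: &[u64; PIECES * SQUARES], m: &Move, hash: Option<&mut u64>) {
    // ... update the actual board state here ...
    if let Some(h) = hash {
        *h ^= zobrist(table, m.piece, m.from); // XOR the piece off its origin square
        *h ^= zobrist(table, m.piece, m.to);   // XOR it onto the destination square
    }
}
```

The downside, as discussed below, is that callers who never want hashing still see the extra parameter and the branch.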
I am also not sure about the design. I feel like enabling hashing should probably not be a runtime decision, i.e. not an optional argument, because the developer will know if they need hashing or not at compile time.
These particular benchmarks are very important to me:
```
$ cargo bench
bench_shallow_perft
  Instructions:      13125195 (+5.163950%)
  L1 Accesses:       17353390 (+6.338914%)
  L2 Accesses:            235 (+24.33862%)
  RAM Accesses:           535 (+26.77725%)
  Estimated Cycles:  17373290 (+6.358436%)

bench_play_sans
  Instructions:         84400 (+12.36105%)
  L1 Accesses:         114228 (+13.82960%)
  L2 Accesses:            191 (+12.35294%)
  RAM Accesses:           764 (+18.44961%)
  Estimated Cycles:    141923 (+14.66209%)
```
Usually it is hard to improve even a single percentage point while keeping the code somewhat idiomatic, so a regression of several percent is quite significant. It may not seem like much, but it would, for example, add hours when processing a large database like https://database.lichess.org/ with https://github.com/niklasf/rust-pgn-reader.
I had not run `cargo bench`... so thanks for this! I notice in the README that jordanbray/chess is slightly faster... any areas where we could improve shakmaty? My thought being it could net out if we add Zobrist.

Regardless, I'll think about how to add hashing (or not) at compile time and keep these benchmarks intact. I'm just happy I have it working.
I would also love to see an efficient `unmake_move`, but that's another issue for another time...
> any areas where we could improve shakmaty? My thought being it could net out if we add Zobrist.
No concrete ideas, but that would be awesome. I'd probably be greedy, though: accept the performance improvements and still reject hashing with overhead :smile:
Ha ha ha... totally understood. I'll chew on this and see what I can come up with. Thanks again!
@niklasf sorry for the long pause on this... life got in the way :-)
I thought about this, and I think I have a solution that'll make everyone (well, maybe not everyone, but you and me :-) ) happy. I'll create a new struct `Zobrist` that takes a generic `<P>` and provide an implementation of `Position` for `Zobrist<P>`, such that it computes the updated hash and then calls the normal method, `play` or `play_unchecked`. I will also add a method to get the hash value.

This way, if someone wants Zobrist hashing, they simply create that struct: `let z = Zobrist { pos: chess };`. If they don't care about Zobrist hashing, they create the `Chess` struct like normal.
Before I go too deep on this, let me know your thoughts... playground here: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=c440472cedfae9d4acb2119568057cd5
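A minimal sketch of the wrapper idea described above, using a toy `Position` trait and placeholder names (the linked playground has the actual prototype; this is only illustrative):

```rust
/// Toy stand-ins for shakmaty's types; illustrative only.
struct Move;

trait Position {
    fn play_unchecked(&mut self, m: &Move);
}

/// Wrapper that carries an incrementally updated Zobrist hash
/// alongside any position type `P`.
struct Zobrist<P> {
    pos: P,
    hash: u64,
}

impl<P> Zobrist<P> {
    /// Method to read the current hash value.
    fn zobrist_hash(&self) -> u64 {
        self.hash
    }
}

impl<P: Position> Position for Zobrist<P> {
    fn play_unchecked(&mut self, m: &Move) {
        // Update the hash for this move (real table lookups omitted),
        // then delegate to the wrapped position.
        self.hash ^= 0xdead_beef;
        self.pos.play_unchecked(m);
    }
}
```

Callers who want hashing wrap their position (roughly `let z = Zobrist { pos: chess, hash: 0 };`), while everyone else keeps using the plain position type and pays nothing, which keeps the opt-in at compile time.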
Yep, that sounds great.
@niklasf I've updated my repo: https://github.com/wspeirs/shakmaty
I ran `cargo bench`, and it's reporting an increase on the two benchmarks you care about... but I'm not sure they're accurate, as I don't know what they're compared against. Have a look and let me know what you think... thanks!
I made some progress on what I think is a reasonably clean interface. The main trait is no longer just a marker, but also contains the implementation. There's an additional trait `ZobristValue`, which is implemented for `u8`, ..., `u64`, `u128` to support wider hashes. The tables are now hardcoded, so that the lower 64 bits match the Zobrist table of Polyglot opening books.

- `prepare_incremental_zobrist_hash` and `finalize_incremental_zobrist_hash` for incremental Zobrist hashing.
- `Option<V>` to `V`.
I'm working on adding Zobrist hashes to shakmaty: https://github.com/wspeirs/shakmaty
@niklasf would love your feedback on what I have thus far. I still need to implement it for all the variants; however, I have `do_move` complete, so it shouldn't be too bad. I have a unit test that computes 100k positions and ensures there are no collisions. I've also gone up to 1M, but it takes a while unless you're running in release mode. Let me know how to proceed and any feedback... thanks!