Closed johny-b closed 4 months ago
Hi, I noticed this over the weekend too. The C library has a really odd set of flags that need to be passed, each producing different output, which makes it hard to write a high-level wrapper. At some point I had to reconfigure it to fix another bug, and I suspect that is what causes the difference between the README and the current output.
I’ve restructured how this is done in a big PR I’m writing at the moment, which also cleans up a lot of the old code. I’m just putting it through some testing and should merge it in the next couple of days. Feel free to build that branch from source in the meantime.
Sorry for the inconvenience! Dom
https://github.com/dominicprice/endplay/blob/557f7ddc944055d7fbdd1b26dc373f7eb058c342/src/endplay/dds/solve.py <- here is the change: `SolveMode` makes the output configurable, and it will default to showing all cards along with their scores.
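For readers wondering what a configurable solve mode might look like, here is a hypothetical sketch. The name `SolveMode` and the fact that `Default` corresponds to the value 3 come from this thread; the other members and their mapping onto the DDS `solutions` parameter are my own guess, not the library's actual code.

```python
from enum import IntEnum

# Hypothetical sketch, not endplay's real implementation.
# The values mirror the DDS SolveBoard 'solutions' parameter:
class SolveMode(IntEnum):
    OptimalOne = 1  # a single optimal card (assumed name)
    OptimalAll = 2  # every card achieving the optimal score (assumed name)
    Default = 3     # every legal card with its exact trick count

print(int(SolveMode.Default))  # 3
```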
I changed 2 to 3 in this line, to match `SolveMode.Default`, and it works :) Thx!
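The effect of that 2 → 3 change can be illustrated with a pure-Python sketch (invented data, not endplay's API): in the underlying DDS solver, the `solutions` parameter controls whether only the optimal card(s) are scored or every legal card is.

```python
# Invented card -> trick-count data for illustration only
all_cards = {"SQ": 9, "SJ": 9, "H4": 7, "D2": 5}

def solved_cards(scores, solutions):
    """Mimic the two output modes of the DDS 'solutions' flag:
    2 -> only cards achieving the optimal score,
    3 -> every legal card with its exact trick count."""
    if solutions == 3:
        return dict(scores)
    best = max(scores.values())
    return {card: tricks for card, tricks in scores.items() if tricks == best}

print(solved_cards(all_cards, 2))  # {'SQ': 9, 'SJ': 9} - optimal cards only
print(solved_cards(all_cards, 3))  # all four cards with their scores
```

This matches the symptom in the issue: with the old setting, only the optimal cards came back with meaningful trick counts.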
Btw, this library is really good!
I copied code from this section:
According to the docs, the output should be
but I only get the first line. I tried digging deeper and it seems that results for other cards are always 0:
Just to be clear, I see the same behaviour (tricks calculated only for the optimal cards) on every board, not only this particular one.
Is there a way to fix this, i.e. to get the trick count for every card? I think I could obtain this using `analyse_play` instead, but I guess that would be much slower, and I want to generate lots of data.

My environment: