Closed shantanukarve closed 3 years ago
Sorry for the super late response.
Even though it's probably too late, I just want to say that it's been a long time since I worked on this project and I'm not too familiar with the specifics anymore. From what I see, the update to your post points at one likely reason: accurate calculations for the flop are expensive and are kept very low in the example configuration file. The cluster abstraction also uses only two buckets on the flop, which is a very low number.
I ran some tests of heads-up no-limit hold'em: 2 players, 1|2 small blind|big blind, 200|200 stacks, maxRaises of 3 4 4 4. For all tests, the cluster-abstraction runs used:

- nb-samples: 0,2,500,500
- buckets: 169,5,10,500
- error bounds: .01,.01,.01,.01
- nb-hist-samples-per-round: 0,1,200,200

For all tests I held the action abstraction to polrelative with raise sizes 0.4,0.8,1.2,2,5,9999. For CFR learning I used 12 threads and run times of 8 hours, and sometimes 16 and 24 hours.
I ran the head-to-heads, specifically NSSS against each of NOOO, NEES, and NEEO. I expected NSSS to perform the worst (i.e., lose money, negative average winnings) and NEEO to be the best. Instead, I'm getting NSSS as the best! Here's a table of results. As you can see, I ran CFR's learning phase for the most sophisticated strategy, NEEO, for longer and longer times (8, then 16, then 24 hours), but that didn't change things. Any ideas on what to experiment with to get the results to align with expectations, meaning NEEO, NEES, and NOOO all better than NSSS? Update: thinking harder, I'm wondering if the clustering abstraction is too coarse, so I need to increase its fineness by increasing nb-samples and nb-hist-samples. Any ideas on the combinatorics around this to see what's appropriate?
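To get a sense of scale for the coarseness question, here's a quick back-of-the-envelope count (plain Python, not part of this project). It counts the raw (hole cards, board) combinations per street, before suit isomorphism, and divides by the bucket counts from my configuration above (169,5,10,500). Whether nb-samples/nb-hist-samples are per hand or per bucket depends on the project's semantics, which I'm not sure of, so this only illustrates how many distinct situations each bucket has to cover on average:

```python
from math import comb

# Raw (hole cards, board) combinations per street, before suit
# isomorphism (isomorphism shrinks these by roughly an order of magnitude).
hole = comb(52, 2)            # 1,326 two-card starting hands
flop = hole * comb(50, 3)     # 1,326 * 19,600 flops
turn = flop * 47              # one more card from the remaining 47
river = turn * 46             # one more card from the remaining 46

# Bucket counts per round from the configuration above (assumed order:
# preflop, flop, turn, river).
buckets = {"preflop": 169, "flop": 5, "turn": 10, "river": 500}
combos = {"preflop": hole, "flop": flop, "turn": turn, "river": river}

for street, b in buckets.items():
    print(f"{street:7s} {combos[street]:>17,} combos "
          f"-> ~{combos[street] // b:,} per bucket")
```

With only 5 flop buckets, each bucket lumps together millions of raw situations, which seems consistent with the "too coarse" hypothesis; bumping the flop/turn bucket counts may matter more than raising the sample counts alone.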