wenkesj / holdem

:black_joker: OpenAI Gym No Limit Texas Hold 'em Environment for Reinforcement Learning

reset_stack() method and self.betting appear to be redundant in player.py? #16

Open BigBadBurrow opened 5 years ago

BigBadBurrow commented 5 years ago

The reset_stack() method doesn't appear to be referenced anywhere in the codebase.

self.betting defined in player.py appears to be redundant (it's only ever set to False). Care needs to be taken if it's removed, though, as it's used in player_features[], so removing it will change the indexes of that list, which may be relied on elsewhere.

VinQbator commented 5 years ago

I'm using reset_stack() to reset stacks after every episode of training. As you can see from the name, it does not have an underscore before it, so it is public.

Can't comment on self.betting yet
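
For context, a minimal sketch of that kind of training loop, assuming the TexasHoldemEnv / add_player usage from the README; how the Player objects are reached (_player_dict here) is an assumption about the env's internals, so adjust to however your code keeps references to them:

```python
import holdem  # package from this repo

# Set up a heads-up table; add_player(seat, stack=...) follows the README.
env = holdem.TexasHoldemEnv(2)
env.add_player(0, stack=2000)
env.add_player(1, stack=2000)

# Keep direct references to the Player objects so their stacks can be
# refilled between episodes (_player_dict is an assumption, see above).
players = list(env._player_dict.values())

for episode in range(1000):
    env.reset()
    # ... play the hand out with env.step(...) and your agents ...

    # Refill every player's chips before the next episode;
    # env.reset() alone does not do this.
    for player in players:
        player.reset_stack()
```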

BigBadBurrow commented 5 years ago

Okay, just be aware that you've hard-coded it to 2000, and it's possible to initialise players with varying stack sizes via env.add_player().
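
To make the caveat concrete, a hedged sketch of the mismatch (player access via _player_dict assumed, as above):

```python
env = holdem.TexasHoldemEnv(2)
env.add_player(0, stack=5000)  # seat 0 buys in for 5000
env.add_player(1, stack=1000)  # seat 1 buys in for 1000

# ... one training episode later ...
for player in env._player_dict.values():
    player.reset_stack()  # both stacks are now 2000,
                          # not the 5000 / 1000 they started with
```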