How can I modify the original env settings, such as each player's starting chip count, or game rules like the ante (a forced bet where every player must put some chips into the pot before the hand, which differs from a standard game)?
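To make the question concrete, here is a minimal sketch of the kind of configuration interface I have in mind. All names here (`NoLimitHoldemConfig`, `start_hand`) are hypothetical, not from the actual library:

```python
from dataclasses import dataclass

@dataclass
class NoLimitHoldemConfig:
    num_players: int = 2
    starting_chips: int = 100  # chips each player begins with
    ante: int = 0              # forced bet posted by every player each hand

def start_hand(config: NoLimitHoldemConfig):
    """Return each player's stack and the pot size after antes are posted."""
    stacks = [config.starting_chips - config.ante] * config.num_players
    pot = config.ante * config.num_players
    return stacks, pot

# 8-player table, 200 chips each, ante of 5
stacks, pot = start_hand(NoLimitHoldemConfig(num_players=8, starting_chips=200, ante=5))
```

Is there an equivalent way to pass this kind of configuration to the real env, or would I need to modify the game class directly?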
As I understand it, if I want to train a strong no-limit hold'em agent that can reach human-level play, I should initialize the env with 8 agents, all of which use the CFR algorithm. (In the example code, a CFR agent plays against a random agent, but as I see it that is just a quick demonstration that the RL algorithm runs correctly.) Is that right?
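For reference, my understanding of the core update inside CFR is regret matching, which turns accumulated positive regrets into a strategy. This is a self-contained sketch of that one step, not the library's implementation:

```python
def regret_matching(regrets):
    """Core step of CFR: normalize positive cumulative regrets into a strategy.

    If no action has positive regret, fall back to the uniform strategy.
    """
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0:
        return [1.0 / len(regrets)] * len(regrets)
    return [p / total for p in positive]

# Regrets for (fold, call, raise): negative regret gets zero probability.
strategy = regret_matching([3.0, 1.0, -2.0])  # -> [0.75, 0.25, 0.0]
```

So my plan would be to have all 8 seats run this kind of CFR update in self-play, rather than training against random agents.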
After the model is trained, can I integrate it into my own code, so that if I feed it a given situation in a well-suited input format it returns an action?
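Something like the following is what I mean by integration. The encoder and agent here are stubs of my own invention, just to show the intended call pattern; I would replace them with the library's real observation format and the trained model:

```python
# Hypothetical integration sketch: a trained agent behind a simple function
# that takes a hand description and returns one of the legal actions.
ACTIONS = ["fold", "call", "raise"]

def encode_state(hole_cards, board, pot, to_call):
    """Toy encoder: the real library would define its own observation format."""
    return {"hole": hole_cards, "board": board, "pot": pot, "to_call": to_call}

class TrainedAgentStub:
    """Placeholder for a trained model; here it just calls when the price is small."""
    def act(self, state):
        return "call" if state["to_call"] <= state["pot"] // 2 else "fold"

agent = TrainedAgentStub()
state = encode_state(hole_cards=["As", "Kd"], board=[], pot=10, to_call=2)
action = agent.act(state)  # -> "call" under this stub policy
```

Is there a documented state format I should target so that a trained agent can be queried this way from external code?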