RemiFabre opened this issue 9 months ago
Hello @RemiFabre! Thank you for showing interest in this project.
To start - something I forgot to add to the readme - this was presented at a workshop, and the corresponding paper might offer some extra explanations: http://id.nii.ac.jp/1001/00207567/
To answer your question directly, the "default rules" expect a 4-player game. As only two players were given, it has to create groups of 4, and the mode "All" will create all possible groups (well, except the ones where every seat goes to the same player), namely the following:
(fl, fl, fl, mc), (fl, fl, mc, mc), (fl, mc, mc, mc)
The option -g 5 means all groups will play 5 games (thus the "Games per group: 5"), and each player will therefore play 1×5 + 2×5 + 3×5 = 30 games (thus the "Games per player: 30").
Since games are 4-player, the average win rate should be 25% (so the win rates in this context shouldn't be expected to sum to 100%).
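For intuition, here is a quick Python sketch of that counting (this is not the arena's actual code, just the logic):

from itertools import combinations_with_replacement

players = ["fl", "mc"]
# Every multiset of 4 seats drawn from the two players, minus uniform groups
groups = [g for g in combinations_with_replacement(players, 4) if len(set(g)) > 1]
print(groups)
# [('fl', 'fl', 'fl', 'mc'), ('fl', 'fl', 'mc', 'mc'), ('fl', 'mc', 'mc', 'mc')]
# With -g 5: each player fills 1 + 2 + 3 = 6 seats, so 6 × 5 = 30 games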
Lastly, for your last question: to perform duels the only thing really needed is to set the number of players in the rules to 2, and for that... I don't think I put an option to do it in the ceramic-arena script. I will try to add that. It should also be pretty easy to do in Python; you can use tests/arena/test_arena.py as inspiration, but I will give you a working example as soon as I get this working again (I hope I didn't forget everything).
So, that felt like a weird time jump. I want to do some clean-up now (such as moving to a pyproject.toml module definition) before putting the rest of the project up to date (such as adding tests for the latest versions of Python).
In any case, I added a quick fix to handle duels in the branch arena-improvements (I will merge it into the main branch after the clean-up). If you want to run duels / set the number of players to two, you can use -p 2, so for your example you can do:
./ceramic-arena fl mc -g 5 -p 2
Which, on my computer, gave me:
Mode: All
Played 1/1 (5/5)
Games per group: 5
Games per player: 5
Total time: 4.6175e+06 µs (real), 3.6940e+07 (times thread count)
Time: 9.233e+05 µs (game), 1.631e+04 µs (step), 1.795e+00 µs (state change)
Average moves per game: 56.6
player               | winrate | avg  | std  | move time | moves
---------------------+---------+------+------+-----------+------
first-legal          | 20.00%  | 29.6 |  8.5 | 5.319e+00 | 28.8
mc-1000-h(0.2,0.2,0) | 80.00%  | 43.0 | 15.6 | 3.320e+04 | 27.8
I mentioned in the above message that it is also possible to do the same in Python; here is the code (it should work in both branches):
from ceramic.players import MonteCarloPlayer, RandomPlayer
from ceramic.arena import AllArena, PairsArena
from ceramic.rules import Rules

# Start from the standard rules, restricted to two players
rules = Rules.BASE
rules.player_count = 2

# Any two agents work; here a random baseline against Monte Carlo
players = [RandomPlayer(), MonteCarloPlayer(rollouts=100)]

# PairsArena plays the given agents head-to-head
arena = PairsArena(rules, players)
arena.count = 10  # number of games to play
arena.run()
arena.print()
Tell me if this answers your question!
Amazing! Thanks a lot for your answer and your work. I'll dig into this more seriously soon. Some questions after a fast read of the paper; apologies if I missed the answers in the paper:
Ok so I read the paper, good work!
In the paper you give some elements of the dimensions of the game tree:
1) "The number of legal actions decreases throughout the rounds, but can start at 228."
2) "We can see in Table 6 that, on average, a player will have access to 44.7 legal actions, or 31.6 actions if we ignore the ones that place the tiles directly in the floor when placing them on the pyramid is possible. However, agents may have to choose among larger sets. Empirically, mc*-10 000 has more than 5% chance to encounter more than 100 smart legal actions."
If you have more data on this, I'd be interested in reading it. My intuitions are:
- That in a duel setup the game tree is much smaller
- That in a round the game tree decreases in size very rapidly with the number of turns played. => Therefore, if we can find heuristics for the opening moves (which I believe we can), maybe it is possible to reach the end of the tree for a round in a reasonable time? And if not, these heuristics could help prioritize the paths to explore first. Eventually the heuristics could be learned and not fed by an expert. Nothing new in this idea, but I'd be interested in your opinion about its feasibility in this use case.
Also here is a repo I worked on with some statistics about the game using a modest brute force method: https://github.com/RemiFabre/azul_statistics
Printing the number of legal actions in a game between two default MC players:
number of legal actions: 108
number of legal actions: 102
number of legal actions: 77
number of legal actions: 47
number of legal actions: 32
number of legal actions: 18
number of legal actions: 15
number of legal actions: 12
number of legal actions: 10
number of legal actions: 7
number of legal actions: 2
Round tree size: 578676084940800
number of legal actions: 71
number of legal actions: 54
number of legal actions: 50
number of legal actions: 31
number of legal actions: 22
number of legal actions: 21
number of legal actions: 14
number of legal actions: 8
number of legal actions: 6
number of legal actions: 5
number of legal actions: 3
number of legal actions: 1
Round tree size: 27674916192000
number of legal actions: 61
number of legal actions: 50
number of legal actions: 35
number of legal actions: 22
number of legal actions: 14
number of legal actions: 9
number of legal actions: 7
number of legal actions: 4
number of legal actions: 3
number of legal actions: 2
Round tree size: 49713048000
number of legal actions: 41
number of legal actions: 35
number of legal actions: 26
number of legal actions: 19
number of legal actions: 10
number of legal actions: 9
number of legal actions: 7
number of legal actions: 5
number of legal actions: 3
number of legal actions: 2
Round tree size: 13398021000
number of legal actions: 39
number of legal actions: 30
number of legal actions: 21
number of legal actions: 19
number of legal actions: 18
number of legal actions: 17
number of legal actions: 12
number of legal actions: 7
number of legal actions: 6
number of legal actions: 4
number of legal actions: 1
Round tree size: 287985559680
number of legal actions: 35
number of legal actions: 32
number of legal actions: 24
number of legal actions: 18
number of legal actions: 10
number of legal actions: 9
number of legal actions: 6
number of legal actions: 4
number of legal actions: 3
number of legal actions: 1
Round tree size: 3135283200
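Here "Round tree size" is simply the product of the legal-action counts within the round; a quick sanity check in Python on the last round above:

from math import prod

# Legal-action counts printed for the last round above
counts = [35, 32, 24, 18, 10, 9, 6, 4, 3, 1]
print(prod(counts))  # 3135283200, matching the "Round tree size" line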
So it's still a tad big :) For now!
It seems you have already answered most of the questions yourself ;), but for my input:
Humans still heavily outperform the best coded agents. This is surprising to me. What approach would you recommend to create a super-human agent?
They outperform the best agent in the paper (Monte-Carlo-based) which, while not that bad, is still less advanced than more recent machine-learning-based ones. Its main advantage is being a good baseline without immediate exploits.
Given the size of the decision tree, are we far from solving an entire round? By this I mean being able to do a deterministic search (minimax + alpha-beta pruning, for example) until the end of the tree, at the scope of one round. We would need a heuristic function to score the state at the end of a round. Since I play the game a lot and have theories I'd like to check, I would really love to hand-craft several heuristic functions and see what works and what doesn't.
For a single round in a two-player game (especially the last ones, where there are fewer possibilities), it might be possible. But you showed the numbers; they are still quite big. It might be possible to reach a very good solution with smart pruning, either with hand-crafted heuristics or a machine-learning-based one. If the code is clear enough, I hope you'll be able to do that with this package ;)
(There still remains the challenge of having a good heuristic for the end of a round.)
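To make the idea concrete, here is a generic depth-limited minimax with alpha-beta pruning; state.legal_actions(), state.apply(), state.round_over() and the heuristic are hypothetical stand-ins, not the actual ceramic API:

def alphabeta(state, depth, alpha, beta, maximizing, heuristic):
    # Score the position once the round ends or the depth budget runs out;
    # heuristic(state) scores from the maximizing player's point of view
    if depth == 0 or state.round_over():
        return heuristic(state)
    if maximizing:
        value = float("-inf")
        for action in state.legal_actions():
            value = max(value, alphabeta(state.apply(action), depth - 1,
                                         alpha, beta, False, heuristic))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the opponent will avoid this branch
        return value
    else:
        value = float("inf")
        for action in state.legal_actions():
            value = min(value, alphabeta(state.apply(action), depth - 1,
                                         alpha, beta, True, heuristic))
            beta = min(beta, value)
            if beta <= alpha:
                break  # alpha cutoff: we would avoid this branch
        return value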
If you have more data on this, I'd be interested in reading it. My intuitions are:
- That in a duel setup the game tree is much smaller
- That in a round the game tree decreases in size very rapidly with the number of turns played. => Therefore, if we can find heuristics for the opening moves (which I believe we can), maybe it is possible to reach the end of the tree for a round in a reasonable time? And if not, these heuristics could help prioritize the paths to explore first. Eventually the heuristics could be learned and not fed by an expert. Nothing new in this idea, but I'd be interested in your opinion about its feasibility in this use case.
It seems the numbers showed your intuitions were correct! I believe that introducing a lot of hand-crafted heuristics will likely result in a less-than-optimal policy, but it will surely be better than the baselines currently present. I think that yes, it is feasible to get pretty good results this way, although it's hard to tell now how it would compare to humans. If you're still going for minimax with alpha-beta pruning, solving the end of a round according to a heuristic (and the last round perfectly) could give it an edge.
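One cheap way to put such heuristics to work without losing exactness is move ordering: explore the heuristic's preferred actions first, so alpha-beta cuts off earlier. A sketch using the same hypothetical interface as above:

def ordered_actions(state, heuristic):
    # Sort actions by the heuristic value of the resulting state, best first.
    # The search result is unchanged; good early moves just prune more.
    return sorted(state.legal_actions(),
                  key=lambda action: heuristic(state.apply(action)),
                  reverse=True)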
They outperform the best agent in the paper (Monte-Carlo-based) which, while not that bad, is still less advanced than more recent machine-learning-based ones. Its main advantage is being a good baseline without immediate exploits.
If I may ask, what would be your approach to try to make the best agent possible using the tools we have today?
Since we have access to an environment in which to run simulations, I would, in the case of 1 vs 1:
I'm of course not sure how well this would work, but I expect it to be a simple, straightforward way of obtaining a very good agent.
In the case of 3 or 4 players, the path is less clear to me. The above strategy could work too, by assuming all opponents behave more or less like the agent (and thus pruning their actions the same way), but if that assumption doesn't hold I would look into Multi-Agent Reinforcement Learning algorithms.
Very interesting! Thanks for the detailed answer.
Hi, first of all, thank you for your work, this is great stuff.
I'd like to compare the performance of 2 agents in a series of 1 v 1 games. For example mc vs fl; I'd expect this would do it:
./ceramic-arena fl mc -g 5
But it generates this:
What are "groups" ? I'd expect only 5 games to be played per player, not 30. And why do the winrates not sum to 100%?
Clearly I did not understand something about the call options; please clarify if there is a way to perform duels. Best,