Closed acxz closed 2 years ago
I spent some time searching for one and it looks like open_spiel (a collection of environments for board games by DeepMind) has one for RBC: https://github.com/deepmind/open_spiel/blob/master/docs/games.md#reconnaissance-blind-chess
Can you find any demo in OpenSpiel? I can't use it in my Python project.
btw, it is highly recommended to organize the code in the way of openai-gym; feature engineering wasted a lot of my time
> Can you find any demo in OpenSpiel? I can't use it in my Python project.
it would be appropriate to ask this at https://github.com/deepmind/open_spiel
> btw, it is highly recommended to organize the code in the way of openai-gym; feature engineering wasted a lot of my time
while this is off-topic, i'll answer in brief
there are limitations that openai-gym environments have, which make them hard to scale in production. Many RL libraries end up rolling their own extensions or their own environment solutions; for example, see RLlib's Environments. I'm sure open_spiel had their own rationale, and again it would be appropriate to ask at their repo.
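For context, the gym-style API assumes a single agent: one `reset()`, one `step()` per turn, one reward stream. Below is a minimal sketch of that contract in plain Python; the `CoinGuess` environment is purely illustrative (not from gym or any other library), but it shows the loop shape that multiplayer, imperfect-information games like RBC do not fit directly.

```python
import random

# "CoinGuess" is a hypothetical toy environment, not part of any library;
# it only illustrates the reset()/step() contract that gym popularized.
class CoinGuess:
    """Guess a hidden coin flip; +1 reward for a correct guess."""

    def reset(self):
        self._coin = random.randint(0, 1)
        return 0  # a trivial, uninformative observation

    def step(self, action):
        reward = 1.0 if action == self._coin else 0.0
        done = True  # one-step episodes
        return 0, reward, done, {}  # obs, reward, done, info

def run_episode(env, policy):
    """The classic single-agent loop every gym-style library assumes."""
    obs = env.reset()
    total = 0.0
    done = False
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
    return total

ret = run_episode(CoinGuess(), lambda obs: 0)
```

Note how the loop hard-codes a single decision-maker and a single reward; a second player, simultaneous moves, or private observations all require stepping outside this interface, which is what motivates the extensions mentioned above.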
thank you for your reply
Thanks for the comments. I see two major issues with treating RBC as a reinforcement learning environment using an OpenAI-gym style API:
trout.py would be another... and so on. This second point in particular is why feature engineering may be a significant part of a machine-learning agent for RBC. It's not time wasted, it's research!
I would just like to point out that, while the above two points are valid concerns for vanilla OpenAI Gym, recent RL environments such as OpenSpiel are able to handle both of them.
There is an RL environment for RBC out there that you can use to train RL agents in a plug-and-play fashion (linked above).
OpenSpiel is a great library for this sort of thing! However, it uses its own API, not the OpenAI-gym API, since it is made for multiplayer games and not static RL environments. When player strategies are not specified, they default to random. You can train your own agent using those tools, but be aware that it is training against a fixed opponent strategy! Typically, you would iteratively train an agent against previous versions of itself in order to advance from training against random actions to training against a strong opponent. In any case, this is not a single RL environment, and it is still not well-suited to an OpenAI-gym style API.
> Typically, you would iteratively train an agent against previous versions of itself in order to advance from training against random actions to training against a strong opponent.
OpenSpiel does support self-play.
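The iterative scheme described above (train against random play, freeze the result, train against the frozen copy, and so on) can be sketched in plain Python. Rock-paper-scissors stands in for the game here; none of these names come from OpenSpiel — only the training pattern is being illustrated.

```python
import random

MOVES = ("rock", "paper", "scissors")
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def best_response(opponent_policy, samples=1000):
    """Return a fixed move that wins most often against the opponent."""
    counts = {m: 0 for m in MOVES}
    for _ in range(samples):
        counts[opponent_policy()] += 1
    most_common = max(counts, key=counts.get)
    # Play the move that beats the opponent's most frequent move.
    return next(m for m in MOVES if BEATS[m] == most_common)

def random_policy():
    return random.choice(MOVES)

# Generation 0 "trains" against random play; each later generation
# trains against a frozen copy of the previous generation's policy.
pool = [random_policy]
for generation in range(3):
    frozen = pool[-1]
    move = best_response(frozen)
    pool.append(lambda m=move: m)

print(len(pool))  # 4 policies: random + 3 trained generations
```

In a real setup, `best_response` would be a full RL training run and the pool entries would be saved network checkpoints, but the freeze-then-train loop is the same.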
It would be amazing if an environment (such as an openai-gym environment) can be provided that makes it easy to plug and play with reinforcement learning libraries.
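One common way to get that plug-and-play feel from a two-player game is to fix the opponent's policy and expose only your side through a gym-style interface. This is a hypothetical sketch (matching pennies stands in for the game; nothing below is an actual OpenSpiel or gym API), with the caveat from earlier in the thread: the wrapped environment trains you against that one fixed opponent.

```python
import random

# Hypothetical wrapper: a two-player game becomes a single-agent
# gym-style environment once the opponent's policy is fixed inside it.
class FixedOpponentEnv:
    def __init__(self, opponent_policy):
        self.opponent_policy = opponent_policy

    def reset(self):
        return 0  # no meaningful observation in this one-shot game

    def step(self, action):
        opp = self.opponent_policy()
        reward = 1.0 if action == opp else -1.0  # match the penny to win
        return 0, reward, True, {}  # obs, reward, done, info

env = FixedOpponentEnv(lambda: random.randint(0, 1))
obs = env.reset()
_, reward, done, _ = env.step(1)
```

Swapping in stronger frozen opponents over time recovers the self-play loop while keeping the familiar `reset()`/`step()` surface for RL libraries.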