google-deepmind / hanabi-learning-environment

hanabi_learning_environment is a research platform for Hanabi experiments.
Apache License 2.0

API documentation #12

Open karhohs opened 5 years ago

karhohs commented 5 years ago

Hi!

I couldn't find documentation for the API. Does it exist?

Also, the README.md states the API is similar to OpenAI Gym. Would someone please share why OpenAI Gym was not sufficient for this project?

Thanks!

mgbellemare commented 5 years ago

Hi,

Unfortunately, what's here is what there is. The example agents hopefully give some insight into the API.

As to your other question, I'm not exactly sure what it's asking -- OpenAI Gym doesn't provide a Hanabi environment per se. AFAIK Gym also doesn't support multiplayer games.

Best,

lanctot commented 5 years ago

To the first question I would just add: as a starting point, check out game_example.cc or game_example.py. They are simple and demonstrate the core methods.

Also, pyhanabi.py has docstrings for most of the non-obvious methods.

Is there anything in particular that is unclear?
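
For anyone looking for a minimal starting point, here is a rough sketch of the game loop that game_example.py walks through (written from memory, so treat the import path, parameter names, and method names as approximate and check the file itself):

```python
# Rough sketch of the game loop demonstrated in game_example.py
# (illustrative; names and import path may differ slightly in the repo).
from hanabi_learning_environment import pyhanabi  # older checkouts: `import pyhanabi`

game = pyhanabi.HanabiGame({"players": 2, "random_start_player": True})
state = game.new_initial_state()

while not state.is_terminal():
  if state.cur_player() == pyhanabi.CHANCE_PLAYER_ID:
    state.deal_random_card()      # resolve chance events (dealing cards)
    continue
  moves = state.legal_moves()     # legal moves for the player to act
  state.apply_move(moves[0])      # a real agent would choose more carefully

print("Game over, score:", state.score())
```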


karhohs commented 5 years ago

Thanks for your explanations. It wasn't clear to me where to start, so the guidance to begin with game_example.cc or game_example.py is very helpful.

WRT my second question, the Hanabi Learning Environment has the potential to be used as a template or gold standard for developing learning environments for other games. I was hoping to get a sense of whether this is encouraged, especially for multiplayer games.

lanctot commented 5 years ago

No problem, maybe we can add a pointer to those files in the README.

On the second question: I think this code base will continue to focus on Hanabi, but yes, I agree it seems like a good way to do multiagent games generally. As Marc said, to the best of my knowledge part of the problem with Gym compatibility is its limited support for multiagent games. For example, Gym's game environments, like Blackjack and (previously?) Hex, assume specific policies for the opponents. In my experience, to do MARL in games you need two things: (i) the environment needs to handle multiple decision-makers, and (ii) the algorithms/agents need specific support for this as well. For (i), the environment may have to handle turn-based games and simultaneous-move games (like gridworlds) quite differently. For (ii), specifically in games, the learning algorithms need to handle things like only a subset of the actions being legal at any given time.
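
To make (ii) concrete, here is an illustrative sketch, not taken from the repo's agents, of an agent that only ever picks from the legal moves the environment exposes. It assumes the rl_env wrapper and the observation keys ('current_player', 'legal_moves') used by the example agents; the exact names may differ in the current code:

```python
# Illustrative only: assumes the rl_env-style observation dict used by the
# example agents in this repo (see rl_env.py for the actual keys).
import random
from hanabi_learning_environment import rl_env

env = rl_env.make("Hanabi-Full", num_players=2)
observations = env.reset()

done = False
while not done:
  current_player = observations["current_player"]
  obs = observations["player_observations"][current_player]
  # Point (ii): the agent restricts itself to the legal subset of actions.
  action = random.choice(obs["legal_moves"])
  observations, reward, done, _ = env.step(action)

print("Episode finished, reward:", reward)
```

A real learning agent would instead mask out illegal actions in its policy output, but the contract is the same: the environment reports which actions are legal, and the agent has to respect that.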

It would be great to have gym environments for multiagent games, but AFAICT these features do not exist yet. People have been talking about adding support for this, though: see https://github.com/openai/gym/issues/934.


stonecoder19 commented 5 years ago

Does anybody understand what the output of train.py means?