Replicable-MARL / MARLlib

One repository is all that is necessary for Multi-agent Reinforcement Learning (MARL)
https://marllib.readthedocs.io
MIT License

Configuration of custom environment #201

Open BonnenuIt opened 10 months ago

BonnenuIt commented 10 months ago

Hi! When I customize my environment in MARLlib, I find it difficult to understand the concepts in the env_config file, e.g. mask_flag, global_state_flag, opp_action_in_cc, and agent_level_batch_update, and how they work.

It would be helpful to provide some documentation on how to set the env config.

Theohhhu commented 10 months ago

Hi. Sorry for the confusion. Here is what each flag means:

- `mask_flag`: whether the environment provides an action mask.
- `global_state_flag`: whether the environment provides a natural global state for use (as in SMAC), as opposed to cases where MARLlib has to construct one itself (as in MPE).
- `opp_action_in_cc`: whether to include the other agents' actions in the centralized value function, e.g. the critic of MAPPO.
- `agent_level_batch_update`: a switch that toggles between batch-based and minibatch-based RL updates.
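To make this concrete, here is a sketch of how these four flags might appear in an environment's YAML config. The key names are the ones discussed above; the surrounding layout, the environment name, and all the values are illustrative assumptions modeled on the per-environment config files MARLlib ships with, not verbatim defaults:

```yaml
# Hypothetical env config for a custom environment.
# All values below are illustrative -- adjust to your own environment.
env: mycustomenv            # hypothetical environment name
env_args:
  map_name: all_scenario    # hypothetical scenario argument

mask_flag: false            # true if the env provides an action mask
global_state_flag: false    # true if the env provides a natural global state (e.g. SMAC);
                            # false means MARLlib constructs one itself (e.g. MPE)
opp_action_in_cc: true      # include other agents' actions in the centralized critic (e.g. MAPPO)
agent_level_batch_update: false  # toggles between batch-based and minibatch-based updates
```

Check the configs bundled with MARLlib's built-in environments for concrete working values for environments similar to yours.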

HawkQ commented 9 months ago

Same question here. I want to use my own fully hand-made custom environment and would like to find more examples or tutorials. Thanks!

I ran into a similar problem: I want to use my custom environment, but add_new_env.py does not answer all of my questions. I hope more general-purpose examples or tutorials can be added to the documentation. Thanks!