-
`demo_Maddpg` does not work. Can someone give a demo of a multi-agent algorithm implementation, such as MAPPO? Thank you very much.
-
In a multi-agent setting, when training e.g. `MAPPO_Agents()`, then calling `MAPPO_Agents.save_model(model_name='model.pth')`, and finally loading the model with `MAPPO_Agents.load_model(path)`, how can I e…
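The `MAPPO_Agents` class above comes from the question itself, so the exact behavior of its `save_model`/`load_model` is library-specific. As a library-agnostic sketch of per-agent checkpointing (hypothetical `Policy` class, plain `pickle` instead of a `.pth` file), the usual pattern is one checkpoint holding every agent's parameters keyed by agent id:

```python
import os
import pickle
import tempfile

# Hypothetical stand-in for one agent's policy parameters.
class Policy:
    def __init__(self, weights):
        self.weights = weights

def save_model(policies, path):
    """Serialize every agent's parameters into one checkpoint file."""
    state = {name: p.weights for name, p in policies.items()}
    with open(path, "wb") as f:
        pickle.dump(state, f)

def load_model(path):
    """Rebuild the per-agent policies from a checkpoint file."""
    with open(path, "rb") as f:
        state = pickle.load(f)
    return {name: Policy(weights) for name, weights in state.items()}

# Round-trip: save three agents' policies, then restore them.
policies = {f"agent_{i}": Policy([0.1 * i, 0.2 * i]) for i in range(3)}
ckpt = os.path.join(tempfile.gettempdir(), "model.pkl")
save_model(policies, ckpt)
restored = load_model(ckpt)
print(sorted(restored))             # ['agent_0', 'agent_1', 'agent_2']
print(restored["agent_2"].weights)  # [0.2, 0.4]
```

In a real PyTorch-based library the same idea would use `torch.save` on a dict of per-agent `state_dict()`s, restored with `load_state_dict` after reconstructing the networks.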
-
Can anybody provide an example or demo of MARL within FinRL?
Thanks.
-
### Question
I checked the tutorials on your website, only to find single-agent RL algorithm libraries (SB3, CleanRL, etc.). If any, what MARL libraries would you recommend when adapting my cust…
-
Do you plan to support environments for multi-agent RL in the near future?
This would be a key factor in my choice of an RL library.
-
@TomorrowIsAnOtherDay does the liftsim simulator support MARL algorithms like MADDPG? If so, could you provide any reference baseline implementation that I could refer to?
-
three other algorithms: Bi-AC, MACPO, and MAPPO-L
I've been exploring the CSQ and CS-MADDPG algorithms in your repository, and I'm impressed by their performance. However, I noticed that the baseline a…
-
When running the MAPPO_MPE example code, the simple_spread_v3.yaml file does not specify the number of agents.
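For reference, the underlying PettingZoo environment `simple_spread_v3.env(...)` takes the agent count through its `N` argument (default 3), so a config that omits it falls back to three agents. Assuming the example forwards environment kwargs from the yaml (a guess about this particular repo's config layout), an explicit entry might look like:

```yaml
env_name: simple_spread_v3
env_kwargs:
  N: 3                       # number of agents (PettingZoo default)
  max_cycles: 25             # episode length in steps
  continuous_actions: false
```

The `env_kwargs` key name here is hypothetical; `N`, `max_cycles`, and `continuous_actions` are the actual parameters of `simple_spread_v3.env` in PettingZoo's MPE suite.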
-
I have two questions for which I could not yet find an answer:
1) how can I extract specific policies out of a trained multi-agent model? e.g. when 3 agents were trained, "agent_1", "agent_2" a…
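As a library-agnostic sketch (the `MultiAgentModel` container and `extract_policy` helper here are hypothetical, not any specific library's API): when the trained model keeps one independent policy per agent id, extracting a specific policy is a dictionary lookup, after which that policy can act on its own observations without the other agents:

```python
# Toy deterministic policy: action = weighted sum of the observation.
class LinearPolicy:
    def __init__(self, weights):
        self.weights = weights

    def act(self, observation):
        return sum(w * o for w, o in zip(self.weights, observation))

# Hypothetical trained multi-agent model: one policy per agent id.
class MultiAgentModel:
    def __init__(self, policies):
        self.policies = policies  # dict: agent id -> policy

    def extract_policy(self, agent_id):
        """Pull out one agent's policy for standalone use."""
        return self.policies[agent_id]

model = MultiAgentModel({
    "agent_1": LinearPolicy([1.0, 0.0]),
    "agent_2": LinearPolicy([0.0, 1.0]),
    "agent_3": LinearPolicy([0.5, 0.5]),
})

# Extract agent_2's policy and run it on its own observation.
solo = model.extract_policy("agent_2")
print(solo.act([3.0, 4.0]))  # 4.0
```

In libraries with parameter sharing (one network for all agents, agent id appended to the observation), there is no per-agent policy to extract; instead the shared network is queried with the desired agent's id.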
-
- [ ] I have marked all applicable categories:
    + [ ] exception-raising bug
    + [ ] RL algorithm bug
    + [ ] documentation request (i.e. "X is missing from the documentation.")
    + [ ] ne…