-
Hello, a quick question: if we train QMIX directly (config=qmix.yaml) on 5m_vs_6m, do the parameters in qmix.yaml need to be changed? We trained on 5m_vs_6m with the version provided in this repo as-is; the test_won_rate is around 0.2, but the paper reports roughly 0.9. Could you advise where the problem might be?
![image](https://user-images.githubusercontent.com/49089…
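For context, on harder SMAC maps the exploration schedule and training budget often matter; below is a hedged sketch of the kind of overrides sometimes tried. The key names follow common pymarl-style configs and are assumptions, not confirmed against this repo's qmix.yaml:

```yaml
# Hypothetical overrides -- key names assumed from pymarl-style configs,
# not verified against this repository's qmix.yaml.
t_max: 2000000               # train longer on hard maps
epsilon_start: 1.0
epsilon_finish: 0.05
epsilon_anneal_time: 500000  # slower epsilon decay is often used on 5m_vs_6m
```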
-
Hi Jongho,
Firstly, thank you for sharing your awesome work!
I am trying to get a deeper understanding of the preprocessing steps.
Is it possible that "2. Estimate Pop's beats" and "3. synch…
-
Thanks for this nice repo. I'm interested in MARL for SMAC and have a few questions about this repo.
1. On page 18 of your [arxiv paper](https://arxiv.org/pdf/2106.07551.pdf), you mention that "…
-
### What happened + What you expected to happen
ray_results/DangerUAV2_v1/QMIX/QMIX_DangerUAV2_v1_83686_00000_0_2022-10-05_13-49-20/checkpoint_001500/checkpoint-1500
2022-10-13 13:00:31,396 INFO…
-
Hi, I want to use this library for multi-agent RL. In the `AgentMAPPO.py` file there are two undefined references:
`from elegantrl.agents.net import ActorMAPPO, CriticMAPPO`: `ActorMAPPO` and `CriticMAPPO`…
-
Hi, I am trying to run a basic MARL setup using MAPPO.
Here's my YAML config file:
```yaml
group: "PayloadCarry"
name: "mappo_payload_carry"
training:
  interface:
    type: "centralized"
    p…
```
-
Hello. I am researching MARL communication and am very interested in your MASIA algorithm. I tried running a simulation with the code you posted here, but there was a problem, so I'd like to ask for your he…
-
Hi there,
I'm using QMix with a custom environment, and training seems to occur, i.e. the value losses are updated, but the Evaluator steps and episodes, as well as the Executor steps and episodes r…
-
### High Level Description
I was building a multi-agent scenario using `smarts.env:hiway-v1`, but I found that whenever I called `env.reset()`, the environment would **return fewer agents** than I ha…
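One way to narrow this down is a small helper that reports which expected agents are absent from the reset observation. This is a generic sketch: the agent IDs and the dict-keyed observation format are assumptions, not taken from the issue:

```python
def missing_agents(reset_obs, expected_ids):
    """Return the expected agent IDs absent from a reset observation.

    `reset_obs` is assumed to be a dict keyed by agent ID, as in
    multi-agent gym-style environments; adjust if hiway-v1 differs.
    """
    return sorted(set(expected_ids) - set(reset_obs))

# Fake observation dict standing in for env.reset() output:
obs = {"Agent-0": {}, "Agent-2": {}}
print(missing_agents(obs, ["Agent-0", "Agent-1", "Agent-2"]))  # ['Agent-1']
```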
-
Hi,
Thanks for the great code! However, when I run the given command to train the MPE:SimpleSpread task, the converged performance seems far from the results in the paper. For example, the averag…