I want to use QMIX with the DeepDrive-Zero environment. Agents in this environment don't act simultaneously; only one agent acts at each time-step. I tested other single-agent RL methods like PPO and they work fine.
How can I use QMIX for this environment? Should I write a wrapper to convert it to a multi-agent env first, change action_space and observation_space to Dict, and then use MultiAgentEnv.with_agent_groups()?
I tried this, but I still get the following error. It seems the spaces aren't converted to Tuple:
ValueError: Obs space must be a Tuple, got Box(29,). Use MultiAgentEnv.with_agent_groups() to group related agents for QMix.
It's also not clear to me how to group agents. I use something like this:
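For context, RLlib's MultiAgentEnv.with_agent_groups() takes a dict mapping each group name to a list of member agent IDs, and the grouped observation becomes a Tuple of the members' individual observations. A minimal pure-Python sketch of that packing, with placeholder names (agent_0, agent_1, group_1 are assumptions, not identifiers from DeepDrive-Zero):

```python
# Mapping from group id -> list of member agent ids: the shape of the
# argument MultiAgentEnv.with_agent_groups() expects.
grouping = {"group_1": ["agent_0", "agent_1"]}

def group_obs(per_agent_obs, grouping):
    """Pack a per-agent obs dict into one tuple obs per group,
    in the member order given by the grouping dict."""
    return {
        group_id: tuple(per_agent_obs[agent_id] for agent_id in members)
        for group_id, members in grouping.items()
    }

# Example: two agents, each with a 3-dim observation
# (small stand-ins for the real Box(29,) observations).
obs = {"agent_0": [0.1, 0.2, 0.3], "agent_1": [0.4, 0.5, 0.6]}
print(group_obs(obs, grouping))
# {'group_1': ([0.1, 0.2, 0.3], [0.4, 0.5, 0.6])}
```

The grouped Tuple obs space passed alongside the grouping dict has to mirror this packing: one Box entry per member agent, in the same order.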
@kargarisaac were you able to solve this? I have a similar use case with Unity ML-Agents' SoccerTwos env. Perhaps @sven1977 has updates or workarounds?
Agents in my env don't have specific names, so I don't know what to put in the "group1" list.
Any hint or help would be appreciated. Thank you.