Closed — yijiezh closed this issue 2 years ago.
Hi @yijiezh,
As an ML-Agents user, I also felt the need for a multiagent wrapper. Unfortunately, the Gym interface was designed for single-agent environments. I know two solutions to extend the Gym interface to multiagent environments:
I found the second solution more convenient, so I extended the Gym wrapper to support multiagent environments using the rllib multiagent interface.
You can find the Python module I developed: unity_wrappers.zip (for ML-Agents release 3).
It works pretty much the same way as the ML-Agents Gym wrapper, except that observations, rewards, dones and actions are dictionaries keyed by agent id:
# Instantiate a multiagent UnityEnvironment
unity_env = UnityEnvironment(file_name="...")
# Wrap it with my wrapper
env = MultiUnityWrapper(unity_env)
# Use it as a Gym environment (with dictionaries instead of np arrays)
observations = env.reset()
# {"agent_1": [0.0, 1.2, 3.0], "agent_2": ...}
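To make the dictionary convention above concrete, here is a minimal, self-contained sketch of the rllib-style multiagent step loop. `MockMultiAgentEnv` is a stand-in written for illustration only; it is not part of ML-Agents or of the wrapper in the zip above.

```python
class MockMultiAgentEnv:
    """Illustrative stand-in mimicking the dict-based multiagent API."""

    def __init__(self, agent_ids=("agent_1", "agent_2")):
        self.agent_ids = agent_ids

    def reset(self):
        # One observation per agent, keyed by agent id.
        return {aid: [0.0, 0.0, 0.0] for aid in self.agent_ids}

    def step(self, actions):
        # Observations, rewards and dones are all per-agent dictionaries.
        obs = {aid: [0.0, 0.0, 0.0] for aid in self.agent_ids}
        rewards = {aid: 0.0 for aid in self.agent_ids}
        dones = {aid: False for aid in self.agent_ids}
        dones["__all__"] = False  # rllib convention: episode-level done flag
        return obs, rewards, dones, {}


env = MockMultiAgentEnv()
observations = env.reset()
# Actions are also submitted as a dict, one entry per agent.
actions = {aid: [0.0] for aid in observations}
observations, rewards, dones, infos = env.step(actions)
```

The `"__all__"` key in `dones` is the rllib convention for signalling that the whole episode is over, as opposed to a single agent finishing.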
Note that my solution is not perfect and may not work in every case. Maybe the ML-Agents team could take inspiration from this solution!
Best
Hi @Fabien-Couthouis, thank you for the reply! I worked on the current UnityEnvironment class and I am very interested in knowing the limitations of the current API. It seems that wrapping the UnityEnvironment is very common and I would like to know why. Is the API hard to use as is?
Hello @vincentpierre ,
In my opinion, wrapping the UnityEnvironment class is especially useful for using Unity environments in an RL framework (such as rllib) with prebuilt algorithms (ML-Agents lacks multiagent algorithms such as MADDPG).
The API is not hard to use, but I find that the distinction between DecisionSteps and TerminalSteps leads to a bit too much verbosity. What is more, as I do not use brains, I prefer wrapping the UnityEnvironment class when I implement my algorithms. This makes the code simpler, and my algorithms can work on other environments (those which implement the rllib multiagent interface).
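The verbosity mentioned above comes from the low-level API splitting each step into DecisionSteps (agents that need an action) and TerminalSteps (agents whose episode just ended), which a wrapper has to merge into a single per-agent view. The sketch below illustrates that merge; the `Steps` namedtuple is a simplified stand-in, not the real mlagents_envs classes.

```python
from collections import namedtuple

# Simplified stand-in for the batched step objects (illustration only).
Steps = namedtuple("Steps", ["agent_id", "obs", "reward"])


def merge_steps(decision_steps, terminal_steps):
    """Merge decision/terminal batches into per-agent dictionaries."""
    obs, rewards, dones = {}, {}, {}
    # Agents that requested a decision this step are not done.
    for aid, o, r in zip(decision_steps.agent_id, decision_steps.obs,
                         decision_steps.reward):
        obs[aid], rewards[aid], dones[aid] = o, r, False
    # Agents that just terminated are marked done.
    for aid, o, r in zip(terminal_steps.agent_id, terminal_steps.obs,
                         terminal_steps.reward):
        obs[aid], rewards[aid], dones[aid] = o, r, True
    return obs, rewards, dones


decision = Steps(agent_id=[0], obs=[[0.1, 0.2]], reward=[0.0])
terminal = Steps(agent_id=[1], obs=[[0.3, 0.4]], reward=[1.0])
obs, rewards, dones = merge_steps(decision, terminal)
# dones is {0: False, 1: True}
```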
But it's only my own opinion, keep up the great work!
I agree on making the UnityEnvironment API fully compatible with GymWrapper.
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Is your feature request related to a problem? Please describe.
By using the registry interface provided by mlagents-envs, the env binaries can be downloaded automatically. However, most of the example envs contain multiple agents, which cannot be wrapped with UnityToGymWrapper.
e.g. for environment "Hallway"
Describe the solution you'd like
Is it possible to provide an interface/parameter to configure how many agents are inside the env?
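One way the requested parameter could look is sketched below. `MockWrapper` and `expected_agents` are hypothetical names invented for this illustration; the real UnityToGymWrapper has no such parameter today.

```python
class MultipleAgentsError(Exception):
    """Raised when the env contains more agents than the wrapper expects."""


class MockWrapper:
    """Hypothetical wrapper that takes the agent count as a parameter."""

    def __init__(self, n_agents_in_env, expected_agents=1):
        if n_agents_in_env != expected_agents:
            raise MultipleAgentsError(
                f"env has {n_agents_in_env} agents, "
                f"expected {expected_agents}")
        self.n_agents = n_agents_in_env


# Today's behaviour: wrapping a multi-agent env (such as Hallway) fails.
failed = False
try:
    MockWrapper(n_agents_in_env=4)
except MultipleAgentsError:
    failed = True

# Requested behaviour: declare the agent count up front.
wrapper = MockWrapper(n_agents_in_env=4, expected_agents=4)
```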