Unity-Technologies / ml-agents

The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
https://unity.com/products/machine-learning-agents

Multi-agent vectorized environment support for gym wrapper. #4120

Closed · abhayraw1 closed 2 years ago

abhayraw1 commented 4 years ago

Is your feature request related to a problem? Please describe.
I recently installed ml-agents for a research project. My use case is a multi-agent scenario involving coordination amongst the agents. However, the gym wrapper currently only allows a single agent to be used.

It would also be awesome to have a bunch of environments running in parallel that can be handled by the gym wrapper!

Describe the solution you'd like
Currently the low-level Python API's UnityEnvironment class provides access to multiple agents. Exposing this through the gym wrapper would be really helpful. One multi-agent environment implementation that I personally like is the one in ray-rllib, here.

For multiple environments, a vectorized approach along the lines of OpenAI's VecEnv could work: link

Since I am going to be developing workarounds for my project anyway, I would like to contribute towards this goal. For now I plan to keep my solution as close as possible to ray-rllib's implementation. Inputs and critiques are welcome!
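To make the idea concrete, here is a rough sketch of the kind of interface I have in mind, loosely following ray-rllib's MultiAgentEnv (per-agent dicts keyed by agent id). The class and attribute names are placeholders of mine, and I'm assuming a single behavior name and continuous actions; the mlagents_envs calls are the ones from the current low-level API, though method names differ a bit across releases:

```python
import numpy as np
from mlagents_envs.environment import UnityEnvironment


class MultiAgentUnityEnv:
    """Sketch of an rllib-style multi-agent wrapper (dicts keyed by agent id)."""

    def __init__(self, file_name=None):
        self._env = UnityEnvironment(file_name=file_name)
        self._env.reset()
        # Assume a single behavior; method names differ a bit across releases.
        self._behavior = self._env.get_behavior_names()[0]
        self._agent_ids = []  # agent order of the last DecisionSteps

    def reset(self):
        self._env.reset()
        decision_steps, _ = self._env.get_steps(self._behavior)
        self._agent_ids = list(decision_steps.agent_id)
        # Observations keyed by agent id, as in rllib's MultiAgentEnv.
        return {aid: decision_steps[aid].obs for aid in self._agent_ids}

    def step(self, action_dict):
        # set_actions expects an (n_agents, action_size) float array whose
        # rows follow the agent order of the last DecisionSteps.
        actions = np.stack([action_dict[aid] for aid in self._agent_ids])
        self._env.set_actions(self._behavior, actions.astype(np.float32))
        self._env.step()
        decision_steps, terminal_steps = self._env.get_steps(self._behavior)
        self._agent_ids = list(decision_steps.agent_id)
        obs = {aid: decision_steps[aid].obs for aid in self._agent_ids}
        rewards = {aid: decision_steps[aid].reward for aid in self._agent_ids}
        dones = {aid: False for aid in self._agent_ids}
        # NOTE: agents that terminated this step show up in terminal_steps,
        # not decision_steps; handling their final obs/rewards is a part
        # that still needs to be worked out.
        return obs, rewards, dones, {}
```

A VecEnv-style variant would stack these per-agent entries into batched arrays instead of dicts.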

Hsgngr commented 4 years ago

Hi, the ray-rllib link is broken

abhayraw1 commented 4 years ago

@Hsgngr I updated the link!!

abhayraw1 commented 4 years ago

For now I have zeroed in on this snippet of code, which raises an exception whenever the number of agents in the UnityEnvironment is greater than one:

https://github.com/Unity-Technologies/ml-agents/blob/3c2fa4d8d1cd981e9cef6b2e2fdb2f77757983c3/gym-unity/gym_unity/envs/__init__.py#L267-L272

Is there any reason why this is the norm? Is it just to make sure that the environments are compatible with standard RL libraries like OpenAI Baselines and Dopamine?
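For readers not following the link, the check amounts to something like the following; this is a paraphrase from memory, not the exact source, so the names are approximate:

```python
# A paraphrase of the linked guard, not the exact source; names approximate.
class UnityGymException(Exception):
    """Error raised by the gym wrapper (defined in gym_unity)."""

def check_agents(n_agents: int) -> None:
    # The wrapper refuses to run when the behavior has more than one agent.
    if n_agents > 1:
        raise UnityGymException(
            f"Only a single agent is supported, but {n_agents} were detected."
        )
```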

abhayraw1 commented 4 years ago

Hi @xiaomaogy, I've managed to get the data from multiple agents by bypassing the above-mentioned check. For stepping I changed the following line: https://github.com/Unity-Technologies/ml-agents/blob/3c2fa4d8d1cd981e9cef6b2e2fdb2f77757983c3/gym-unity/gym_unity/envs/__init__.py#L169

using (-1, action_size) instead of (1, action_size). The check that the number of agents matches is done in the set_actions method of the UnityEnvironment class, so I haven't enforced any checks of my own for now. https://github.com/Unity-Technologies/ml-agents/blob/20527d10121b68c60b490468eafed0465df498e3/ml-agents-envs/mlagents_envs/environment.py#L338-L345
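For clarity, the reshape change is just the following (attribute names as in the linked wrapper code):

```python
# Before (single agent hard-coded in the batch dimension):
#   action = np.array(action).reshape((1, self.action_size))
# After (agent count inferred from the flat action array):
action = np.array(action).reshape((-1, self.action_size))
```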

The issue I am facing now is that when the episode ends, I only get the observations from the agent responsible for the termination.

My use case, however, is quite different. I want episodes to be agent-dependent: even if some agent "dies", the rest of the agents should continue, and the dead agent would respawn somewhere else on the map! Is this achievable? Could you also give me some pointers on possible pitfalls I should look out for? Thanks in advance!!

abhayraw1 commented 4 years ago

This snippet is what is actually responsible for returning the observations when an agent reaches its terminal condition: https://github.com/Unity-Technologies/ml-agents/blob/3c2fa4d8d1cd981e9cef6b2e2fdb2f77757983c3/gym-unity/gym_unity/envs/__init__.py#L175-L180

In a multi-agent/vectorized setting, would it be okay to return the observations/rewards/dones from both decision_step and terminal_step rather than from only one of them?
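Something like the following is what I have in mind (a sketch with my own variable names; env and behavior_name are the usual low-level-API handles):

```python
# Terminated agents report their final observation and reward with done=True;
# everyone else keeps stepping with done=False.
decision_steps, terminal_steps = env.get_steps(behavior_name)

obs, rewards, dones = {}, {}, {}
for aid in terminal_steps.agent_id:
    obs[aid] = terminal_steps[aid].obs
    rewards[aid] = terminal_steps[aid].reward
    dones[aid] = True
for aid in decision_steps.agent_id:
    obs[aid] = decision_steps[aid].obs
    rewards[aid] = decision_steps[aid].reward
    dones[aid] = False
```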

P.S.: One workaround I can think of for my particular use case is to not call the EndEpisode() method in the agents' C# script. But then I still need to know whether an agent has terminated or not. I don't know if that makes sense!

laszukdawid commented 3 years ago

A similar question to the original poster's: is there any reason why multi-agent isn't supported? The code changes needed to make the gym-like API support it aren't difficult, so I'm trying to figure out whether I'm missing something or whether this is purely a conceptual difficulty.

laszukdawid commented 3 years ago

Since I need this for my own purposes, I've added my own wrapper (based on Unity's wrapper), which can be found here: https://github.com/laszukdawid/ai-traineree/blob/master/ai_traineree/tasks.py#L101 (associated commit: https://github.com/laszukdawid/ai-traineree/commit/39dcf3188d0b14853508c48f63416a2df7a94a7e).

I'd appreciate any reply from Unity's team. I'm planning on adding more support for multi-agent use cases and wouldn't mind contributing a bit.

Ademord commented 3 years ago

@laszukdawid can you provide a simple Colab showing how to use your wrapper? I am in the dark with the Python API and the gym wrapper's outdated documentation.

laszukdawid commented 3 years ago

For log continuity: I replied to Ademord on an issue they created in my deep reinforcement learning repository. I'm happy to assist where I can.

dynamicwebpaige commented 2 years ago

Are there any updates on this issue? It would be great to see support for Ray's RLlib in ML-Agents, particularly for multi-agent reinforcement learning.

xcao65 commented 2 years ago

Sorry, we are not currently supporting a multi-agent vectorized environment for the gym wrapper.

dynamicwebpaige commented 2 years ago

Understood, and thanks for the update, @xcao65!

github-actions[bot] commented 1 year ago

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.