pytorch / rl

A modular, primitive-first, python-first PyTorch library for Reinforcement Learning.
https://pytorch.org/rl

[Feature Request] Include group-level done fields in the VMAS environment #2536

Open · thomasbbrunner opened this issue 2 weeks ago

thomasbbrunner commented 2 weeks ago

Motivation

In the current implementation of the VMAS environment, the done fields are only available at the root of the tensordict. However, for training it is useful to also have them at the group level.

This is also an issue for the TorchRL tutorial scripts multiagent_ppo.py and multiagent_competitive_ddpg.py, which have to manually add the done and terminated fields at the group level with the right shape:

https://github.com/pytorch/rl/blob/main/tutorials/sphinx-tutorials/multiagent_ppo.py#L655 https://github.com/pytorch/rl/blob/main/tutorials/sphinx-tutorials/multiagent_competitive_ddpg.py#L732
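For context, here is a minimal sketch of the kind of manual patching those tutorials perform. The function name and variables (`td`, `n_agents`) are illustrative, not the tutorials' exact code:

```python
from tensordict import TensorDict

def patch_group_done(td: TensorDict, n_agents: int) -> TensorDict:
    # Copy the root-level done/terminated into the "agents" group,
    # expanded to the per-agent shape [*batch, n_agents, 1].
    for key in ("done", "terminated"):
        root = td.get(("next", key))  # shape: [*batch, 1]
        td.set(
            ("next", "agents", key),
            root.unsqueeze(-2).expand(*root.shape[:-1], n_agents, 1),
        )
    return td
```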

The PettingZoo env implementation does include the group-level done fields.

So, can we also include these fields in the VMAS code? If so, I'd be willing to work on this feature.


thomasbbrunner commented 2 weeks ago

Tagging @matteobettini, as you also worked on the VmasEnv.

matteobettini commented 2 weeks ago

Hey, thanks for opening this.

So the idea is that the location of the done keys of a multiagent env will tell you how the env operates. For example: VMAS only terminates globally, so its done keys live at the root of the tensordict, whereas an env with per-agent termination (like PettingZoo) also carries them at the group level.

In general the rationale is that, if some of the keys need to be expanded later, this can be done lazily at the time you need them.
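As a hedged illustration of that lazy expansion (the variable names are mine, not an API in the library), the global done can be broadcast to the per-agent shape only at the point where a training component actually needs it:

```python
# Expand the global done to the per-agent reward shape on demand.
reward = td.get(("next", "agents", "reward"))  # [*batch, n_agents, 1]
done = td.get(("next", "done"))                # [*batch, 1]
per_agent_done = done.unsqueeze(-2).expand(reward.shape)
```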

Pre-adding these extra keys and creating new entries in the env tensordicts, regardless of whether you will need them, could lead to extra computation, since torchrl will track a larger done_spec and so on.

Another motivation is readability: right now, by looking at the td coming from a vmas env, you immediately know that there is no per-agent done. If instead we add this, a user would not immediately know that it is just a copy of the global done that is useful for some training operations.

Let me know what you think.

thomasbbrunner commented 2 weeks ago

> So the idea is that the location of the done keys of a multi-agent env will tell you how the env operates.

That makes sense! And I agree that it is more intuitive this way instead of having the fields everywhere.

However, IMO it is suboptimal to have to manually patch the data to make it compatible with the trainer. I'd expect TorchRL environments and TorchRL trainers to be plug-and-play compatible.

Or at least to have a default solution for this. I see some options for such a solution:

  1. Adding an env transform to patch the data (I don't think that such a transform exists?).
  2. Adding a postproc for the collector to patch the data (IMO the preferred approach; see the sketch after this list).
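A minimal sketch of option 2, assuming a VMAS env with an "agents" group and an already-built `env` and `policy` (the function and its name are illustrative):

```python
from torchrl.collectors import SyncDataCollector

def expand_done_to_group(td):
    # Copy root-level done/terminated into the "agents" group,
    # expanded to the per-agent reward shape [*batch, n_agents, 1].
    reward_shape = td.get(("next", "agents", "reward")).shape
    for key in ("done", "terminated"):
        root = td.get(("next", key))  # [*batch, 1]
        td.set(("next", "agents", key), root.unsqueeze(-2).expand(reward_shape))
    return td

collector = SyncDataCollector(
    env,
    policy,
    frames_per_batch=6000,
    total_frames=60_000,
    postproc=expand_done_to_group,  # runs on every collected batch
)
```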

What do you think of this?

matteobettini commented 2 weeks ago

Yeah, 2 is the best.

We actually already have that transform in our implementations: https://github.com/pytorch/rl/blob/997d90e1b4e497707903801addee64a199f1ce1a/sota-implementations/multiagent/utils/utils.py#L21

We could consider extending it and adding it to the lib!
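For reference, the SOTA scripts wire that helper into the collector roughly as below. The class name and arguments are taken from the linked file and may differ; treat this as an assumption, not confirmed library API:

```python
from utils.utils import DoneTransform  # helper in sota-implementations/multiagent

collector = SyncDataCollector(
    env,
    policy,
    frames_per_batch=6000,
    total_frames=60_000,
    # Expands the entries in env.done_keys to match the reward's per-agent shape
    postproc=DoneTransform(reward_key=env.reward_key, done_keys=env.done_keys),
)
```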

@vmoens

thomasbbrunner commented 2 weeks ago

Oh, this is great! Should we move this transform to the main lib and also use this in the tutorials? I'm open to working on this.

vmoens commented 2 weeks ago

I'm open to it. @matteobettini, do you remember why it's not part of the core lib? The docstrings are a bit dull and the name of the transform is a bit too generic for my taste, but it'd fit nicely in the lib!

matteobettini commented 2 weeks ago

I don’t remember.

Yes, agreed, we need to extend it and test it a bit, but it would be very useful to have.

matteobettini commented 2 weeks ago

We could also consider having a "multiagent" file in the transforms module to avoid clutter.

vmoens commented 2 weeks ago

Of course, if the usage is only in MARL settings that makes sense.