facebookresearch / BenchMARL

A collection of MARL benchmarks based on TorchRL
https://benchmarl.readthedocs.io/
MIT License

MAgent2 integration #135

Closed · JoseLuisC99 closed this issue 1 month ago

JoseLuisC99 commented 1 month ago

Hi! I'm trying to integrate BenchMARL with the MAgent2 environment, but I've encountered some problems. The main issue is that when I execute experiment.run(), I get the following error:

File ~/miniconda3/envs/marl/lib/python3.11/site-packages/benchmarl/experiment/logger.py:134, in Logger.log_collection(self, batch, task, total_frames, step)
    132 to_log.update(task.log_info(batch))
    133 # print(json_metrics.items())
--> 134 mean_group_return = torch.stack(
    135     [value for key, value in json_metrics.items()], dim=0
    136 ).mean(0)
    137 if mean_group_return.numel() > 0:
    138     to_log.update(
    139         {
    140             "collection/reward/episode_reward_min": mean_group_return.min().item(),
   (...)
    143         }
    144     )

RuntimeError: stack expects each tensor to be equal size, but got [1] at entry 0 and [16] at entry 1

While trying to fix it, I found that the error is likely caused by BenchMARL expecting the same number of "done" signals for both adversary groups. In other words, BenchMARL doesn't currently support scenarios where the number of agents that die within an episode differs between groups.
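For reference, here is a minimal sketch that reproduces the same shape mismatch outside BenchMARL (the group names and tensor sizes are made up for illustration; they are not the real MAgent2 values):

```python
import torch

# Each group contributes one episode-return entry per finished episode, so the
# per-group tensors can have different lengths when the groups finish episodes
# at different rates.
json_metrics = {
    "group_0/reward/episode_reward": torch.zeros(1),   # 1 finished episode
    "group_1/reward/episode_reward": torch.zeros(16),  # 16 finished episodes
}

# Same pattern as the call in benchmarl/experiment/logger.py that fails above:
mean_group_return = torch.stack(
    [value for key, value in json_metrics.items()], dim=0
).mean(0)
# RuntimeError: stack expects each tensor to be equal size, but got [1] at entry 0 and [16] at entry 1
```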

I'd like to know if there is a solution or wrapper that can cover this case, or if there is a way to add it.

JoseLuisC99 commented 1 month ago

This issue may be related to feature two in #94 (support for variable number of agents).

matteobettini commented 1 month ago

Hey! Thanks for opening this, I think we can definitely fix it. The code in the loggers can definitely be more flexible, and it is about time I seriously improved it.

To understand the issue better I need more context. In particular:

BenchMARL does not currently support this, but it does support some agents being done before others. In that case, the rollout continues until the global done is set (in PettingZoo this can be computed with any or all over the agent dones). If some agents are done while the rollout continues, those agents will still be required to act, but you can either ignore their actions in the env and give them a reward of 0, or mask their actions so that only the no-op action remains available (like in SMAC), again giving a reward of 0.
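To make the first option concrete, here is a hypothetical sketch of a PettingZoo parallel-API wrapper that keeps dead agents around until the global done, ignoring their actions and giving them a reward of 0 (the wrapper name and details are illustrative only; this is not part of BenchMARL or MAgent2):

```python
class PadDeadAgentsWrapper:
    """Keeps all agents present until the global done; dead agents get reward 0."""

    def __init__(self, env):
        self.env = env  # a PettingZoo parallel env

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        self._last_obs = dict(obs)
        return obs, info

    def step(self, actions):
        # Only forward the actions of agents still alive in the underlying env;
        # the actions of dead agents are simply ignored.
        live_actions = {a: act for a, act in actions.items() if a in self.env.agents}
        obs, rew, term, trunc, info = self.env.step(live_actions)

        # Dead agents keep a zero reward, a done flag, and their last observation
        # until the global done ends the rollout. The global done itself can be
        # computed with all(...) (or any(...), depending on the task) over the agent dones.
        for a in actions:
            rew.setdefault(a, 0.0)
            term.setdefault(a, True)
            trunc.setdefault(a, False)
            obs.setdefault(a, self._last_obs[a])
        self._last_obs = dict(obs)
        return obs, rew, term, trunc, info
```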

If you could provide more details on your env, I'll be able to help.

In general, I will already start making a PR that makes the code snippet you got stuck on more flexible, as I think there is a lot I can already improve there.

JoseLuisC99 commented 1 month ago

In this environment, some agents finish before others (when they lose all their health points). Once an agent is done, a truncation or termination flag is set and returned at every iteration until the episode ends. The environment effectively ignores the actions of finished agents, as there is no no-op action available.

This causes a problem when logging. The logger gets the done flag for each group using experiment.logger._get_done(group, batch), and then computes the mean episode reward per group with episode_reward.mean(-2)[done.any(-2)]. Since the number of finished episodes differs between groups, the resulting per-group tensors have different sizes, causing an error because torch.stack requires tensors of the same shape.
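To illustrate that computation with concrete (made-up) shapes, and one possible way to make the aggregation shape-agnostic (not necessarily what #136 implements):

```python
import torch

# One group where only 1 of 16 envs has a finished episode...
episode_reward_a = torch.rand(16, 3, 1)                 # [n_envs, n_agents, 1]
done_a = torch.zeros(16, 3, 1, dtype=torch.bool)
done_a[0] = True
returns_a = episode_reward_a.mean(-2)[done_a.any(-2)]   # shape [1]

# ...and another group where all 16 envs have finished episodes.
episode_reward_b = torch.rand(16, 5, 1)
done_b = torch.ones(16, 5, 1, dtype=torch.bool)
returns_b = episode_reward_b.mean(-2)[done_b.any(-2)]   # shape [16]

# torch.stack([returns_a, returns_b]) raises the RuntimeError from the traceback.
# Reducing each group to a scalar before stacking is one way to make it flexible:
mean_group_return = torch.stack([returns_a.mean(), returns_b.mean()], dim=0).mean(0)
```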

matteobettini commented 1 month ago

Very clear, got it. I will fix this in #136.

matteobettini commented 1 month ago

If you want to try the PR as it is now, it should already work well for collection logging (you can try it without evaluation).

I still have to fix evaluation logging.

matteobettini commented 1 month ago

Thanks for bearing with me.

The PR is ready, and its description gives an overview of how things are computed now.

Let me know if this fixes your issue.

matteobettini commented 1 month ago

By the way, if you would like to contribute MAgent2 to BenchMARL once you finish, we would love that!

Otherwise, could you share your implementation? I might consider adding a wrapper for it in the future.

JoseLuisC99 commented 1 month ago

Sure, my intention is definitely to contribute. However, I still need to investigate some performance issues within MAgent2. Once I've finished training some models, I'd be happy to share my contributions.