salesforce / warp-drive

Extremely Fast End-to-End Deep Multi-Agent Reinforcement Learning Framework on a GPU (JMLR 2022)
BSD 3-Clause "New" or "Revised" License

How to implement CTDE-based MARL algorithms on the platform? #61

Closed fmxFranky closed 1 year ago

fmxFranky commented 2 years ago

How can I implement joint-learning-based MARL algorithms (e.g. MAPPO, QMIX, etc.), rather than independent-learning-based algorithms (such as the PPO implemented in the paper), on WarpDrive? Do you have plans to provide tutorials on this? Thanks a lot~

Emerald01 commented 2 years ago

Hi, thank you for asking.

This is a great question. The short answer is that the training algorithm is entirely independent of the WarpDrive core, since the output of WarpDrive is 100% compatible with PyTorch tensors. The main contribution of WarpDrive is that it provides extremely high throughput for generating large training batches from environment simulation on the GPU, while we rely on mature optimizers, such as those provided by PyTorch, for training. If you look at the loss function signature of the trainer, for example the PPO one: https://github.com/salesforce/warp-drive/blob/f478be5cee510e54ceeb598e7dc059a0e44b3891/warp_drive/training/algorithms/ppo.py#L42

def compute_loss_and_metrics(
        self,
        timestep=None,
        actions_batch=None,
        rewards_batch=None,
        done_flags_batch=None,
        action_probabilities_batch=None,
        value_functions_batch=None,
        perform_logging=False,
    ):

Those WarpDrive-generated batches are PyTorch tensors with shape [num_steps, num_envs, num_agents_per_env, *]. Therefore, at this step you can implement any MARL algorithm with PyTorch, just as you would if your step function were written in Python or loaded from Gym, for example. We are considering strengthening the collection of training algorithms, and you are also welcome to contribute.
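To make this concrete, here is a minimal, hypothetical sketch (not part of WarpDrive; the CentralizedCritic class and its layer sizes are assumptions) of a MAPPO-style centralized critic that consumes a rollout tensor with exactly this [num_steps, num_envs, num_agents_per_env, *] layout, while decentralized actors would keep using the per-agent slices:

import torch
import torch.nn as nn

class CentralizedCritic(nn.Module):
    """Hypothetical CTDE-style critic: it sees the concatenated observations
    of all agents in an environment as a joint (global) state."""

    def __init__(self, num_agents, obs_dim, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_agents * obs_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, obs_batch):
        # obs_batch: [num_steps, num_envs, num_agents, obs_dim], the WarpDrive batch layout
        num_steps, num_envs, num_agents, obs_dim = obs_batch.shape
        joint_obs = obs_batch.reshape(num_steps, num_envs, num_agents * obs_dim)
        return self.net(joint_obs).squeeze(-1)  # one joint value per (step, env)

# Example usage with dummy data shaped like a WarpDrive rollout:
critic = CentralizedCritic(num_agents=5, obs_dim=16)
values = critic(torch.zeros(100, 4, 5, 16))  # -> shape [100, 4]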

fmxFranky commented 2 years ago

Thanks a lot~ I will give it a try.

fmxFranky commented 1 year ago

Hi, I am trying to implement CTDE methods with the repo. Following the commonly used setting in the MARL context, i.e. Dec-POMDP, I set use_full_observation to False so that each agent only has access to partial observations. My question is: how can I get an extra global state in an elegant way when interacting with the CUDA env? Thanks a lot ^^

Emerald01 commented 1 year ago

What is the "extra global state" in your definition? I think I know what you want to ask, but I need to confirm your question before giving you the answer. Thank you.

fmxFranky commented 1 year ago

I mean o, s, r, d = env.step(a), i.e. returning both the individual observations and a shared global state from the interface.
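Something like this hypothetical wrapper (just to illustrate what I mean, not WarpDrive's actual API; it assumes per-agent observations come back as a dict keyed by agent id):

import numpy as np

class GlobalStateWrapper:
    """Hypothetical wrapper illustrating the requested interface: return
    per-agent partial observations plus a shared global state."""

    def __init__(self, env):
        self.env = env

    def step(self, actions):
        obs, rewards, done, info = self.env.step(actions)  # obs: dict of per-agent partial observations
        # One simple choice of global state: the concatenation of all agents' observations.
        global_state = np.concatenate([obs[agent_id] for agent_id in sorted(obs)], axis=-1)
        return obs, global_state, rewards, done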

Emerald01 commented 1 year ago

Those training batches are obtained from the data manager and interpreted as torch tensors. Please note that I just updated a few batch definitions so they are cleaner; please pull the latest main branch. For how to use them, just follow this: https://github.com/salesforce/warp-drive/blob/master/warp_drive/training/trainer.py#L642

If you want to understand a few details, here are a few comments; you can mostly find them here: https://github.com/salesforce/warp-drive/blob/master/warp_drive/training/utils/data_loader.py#L73

(1) Each batch is policy-wise. The name follows the pattern f"{_ACTIONS/_REWARDS/_DONES}_batch" + f"_{policy_name}", and only the agents belonging to the corresponding policy show up in it. For example, in tag_continuous you will have rewards_batch_runner and rewards_batch_tagger if your configuration has both runner and tagger policies.

(2) The shape of those batches is (training_batch_size_per_env, num_envs, number of agents mapped by policy_tag_to_agent_id_map[policy_name], ...), where training_batch_size_per_env = training_batch_size // num_envs.

(3) f"{_PROCESSED_OBSERVATIONS}_batch" + f"_{policy_name}" is the flattened observations that model.forward() used for the input layer of the neural net. It will output the the [action_probabilities for each action category], value_functions. The flattened observation is basically flattening original observation space which might be a gym.Box of gym.Dict to something like (n_env, n_agent, np.prod(observation_space.shape)).