Replicable-MARL / MARLlib

One repository is all that is necessary for Multi-agent Reinforcement Learning (MARL)
https://marllib.readthedocs.io
MIT License

Running inference with the learned policies #233

Open arshad171 opened 5 months ago

arshad171 commented 5 months ago

Hi,

Firstly, thank you for your efforts, amazing work!

I am currently working with a custom environment and have managed to train MARL policies. However, I am struggling to run inference with the learned policies.

I came across #69 and load_and_reference, but I noticed that while running inference this way, the actions received by the step function (where I log the relevant metrics during inference, since render invokes the training loop, which in turn calls the step function) do not make sense, for the following reasons:

  1. The actions keep fluctuating given the same state, which I believe shouldn't be the case.
  2. The randomness associated with the actions changes when I update the seed in the ray.yaml config.

Based on these observations, I am inclined to believe it is exploration noise being added to the actions (I am running the PPO algorithm), but I may be wrong. (My environment does not have any randomness.)

Could you please let me know the right way to run inference: through the MARLlib interface, or by loading the policies directly via RLlib? And if render is the way to run inference, how can I get consistent results that do not keep changing with the random seed?

Thanks, Arshad

Morphlng commented 5 months ago

MARLlib's render API actually runs one episode of training, so the policy is not truly doing inference. I would recommend only loading the model with MARLlib's API and then running inference with RLlib's compute_single_action API. For more detail, you can refer to my evaluation script.

In general, for a stochastic policy, you should set explore=True to achieve the best performance.
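For reference, a minimal sketch of that workflow might look like the following (this is not the linked evaluation script). It assumes a Ray 1.x-style RLlib PPOTrainer, a params.pkl and checkpoint saved during training, and a custom multi-agent env that returns per-agent observation dicts; the paths, the env class MyCustomMultiAgentEnv, and the policy id "shared_policy" are placeholders to adapt to your own setup.

```python
# Inference sketch (assumptions: Ray 1.x RLlib PPOTrainer, a params.pkl and
# checkpoint produced by training, and a multi-agent env following RLlib's
# MultiAgentEnv convention of per-agent dicts). Paths, the env class, and the
# policy id are placeholders.
import pickle

import ray
from ray.rllib.agents.ppo import PPOTrainer

from my_project.envs import MyCustomMultiAgentEnv  # hypothetical custom env

ray.init(local_mode=True)

# Reload the training config saved next to the checkpoint.
with open("/path/to/experiment/params.pkl", "rb") as f:  # placeholder path
    config = pickle.load(f)
config["num_workers"] = 0   # no rollout workers needed for inference
config["explore"] = False   # deterministic actions -> reproducible episodes

trainer = PPOTrainer(config=config, env=MyCustomMultiAgentEnv)
trainer.restore("/path/to/checkpoint_000100/checkpoint-100")  # placeholder path

env = MyCustomMultiAgentEnv(config.get("env_config", {}))
obs = env.reset()
done = {"__all__": False}
episode_return = 0.0
while not done["__all__"]:
    # One action per agent; explore=False removes the sampling noise that
    # makes actions fluctuate for the same state.
    actions = {
        agent_id: trainer.compute_single_action(
            agent_obs,
            policy_id="shared_policy",  # placeholder: match your policy mapping
            explore=False,
        )
        for agent_id, agent_obs in obs.items()
    }
    obs, rewards, done, info = env.step(actions)
    episode_return += sum(rewards.values())
print("episode return:", episode_return)
```

Setting explore=False should also give the consistent, seed-independent actions asked about above; explore=True samples from the PPO action distribution, which is what makes the actions fluctuate for the same state.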

arshad171 commented 4 months ago

I can't thank you enough for the script! I had to tweak a few things to make it work for my custom policies, but it works like a charm! I had been grappling with running inference for quite some time.