arshad171 opened this issue 5 months ago
MARLlib's render API actually runs one episode of training, so the policy is not truly doing inference. I would recommend that you only load the model with MARLlib's API, then run inference using RLlib's compute_single_action API. For more details, you can refer to my evaluation script.
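A minimal sketch of this workflow, not the author's actual evaluation script: restore the trained PPO checkpoint and query the policy directly with compute_single_action instead of going through render. This assumes MARLlib's RLlib backend (ray 1.x style imports) and a standard MultiAgentEnv; the env name, checkpoint path, and policy id are placeholders for your own setup.

```python
import ray
from ray.rllib.agents.ppo import PPOTrainer

ray.init()

# "my_multiagent_env" is a placeholder; a custom env must be registered
# beforehand via ray.tune.register_env.
trainer = PPOTrainer(config={"framework": "torch"}, env="my_multiagent_env")
trainer.restore("/path/to/checkpoint/checkpoint-100")  # placeholder path

# Reuse the trainer's own env instance for the rollout.
env = trainer.workers.local_worker().env
obs = env.reset()
done = {"__all__": False}
while not done["__all__"]:
    # Query the trained policy for each agent instead of calling render.
    actions = {
        agent_id: trainer.compute_single_action(
            agent_obs,
            policy_id="shared_policy",  # placeholder policy id
            explore=True,
        )
        for agent_id, agent_obs in obs.items()
    }
    obs, rewards, done, infos = env.step(actions)
```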
In general, for a stochastic policy, you should set explore=True to achieve the best performance.
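Continuing the sketch above, the flag trades stochastic sampling for determinism: with a stochastic policy head such as PPO's Gaussian, explore=True samples from the action distribution, while explore=False returns the deterministic (e.g. mean) action.

```python
# explore=True samples from the policy's action distribution;
# explore=False returns the deterministic action instead.
sampled_action = trainer.compute_single_action(agent_obs, explore=True)
greedy_action = trainer.compute_single_action(agent_obs, explore=False)
```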
I can't thank you enough for the script! I had to tweak a few things to make it work for the custom policies I had, but it works like a charm! I was grappling with running inference for quite some time.
Hi,
Firstly, thank you for your efforts, amazing work!
I am currently working with a custom environment and managed to train MARL policies. However, I am grappling with running inference on the learned policies.
I came across #69 and load_and_reference, but I noticed that while running inference this way, the actions received by the step function (where I log the relevant metrics during inference, since render invokes the training loop, which in turn calls the step function) do not make sense, for the following reasons:

- The actions keep changing across runs, even though I fixed the seed in the ray.yaml config.
- My environment does not have any randomness.

Based on these observations, I am inclined to believe that noise is being added to the actions (I am running the PPO algorithm), but I may be wrong.
May I please know what the right way is to run inference: through the MARLlib interface, or by loading the policies directly via RLlib? And if render is the way to run inference, how can I get consistent results that do not keep changing with the random seed?
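For later readers, this is roughly the reproducibility check the question is asking about: fixing RLlib's seed and disabling exploration should make a deterministic environment reproduce the same trajectory on every run. A sketch assuming standard RLlib config keys; the env name and checkpoint path are placeholders.

```python
import ray
from ray.rllib.agents.ppo import PPOTrainer

ray.init()
config = {
    "framework": "torch",
    "seed": 0,          # fix RLlib's global seed across runs
    "explore": False,   # turn off action sampling/noise at evaluation time
}
trainer = PPOTrainer(config=config, env="my_multiagent_env")  # registered env
trainer.restore("/path/to/checkpoint/checkpoint-100")  # placeholder path
```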
Thanks, Arshad