huawei-noah / SMARTS

Scalable Multi-Agent RL Training School for Autonomous Driving
MIT License

Update `ray.rllib` to 2.5 #2067

Closed Gamenot closed 1 year ago

Gamenot commented 1 year ago

> Looks good to me. I don't have strong opinions about which approach to use. I would probably vote for option 3 if I had to choose.

@saulfield OK, I will pursue this option.

I think I will still keep option 2 around, but as a different primitive example.

```
rllib/
├── tune_pbt_pg_example.py  # Approach 3
└── pg_example.py           # Approach 2
```
Gamenot commented 1 year ago

I have most things working. The main remaining issue is that the parallel environments do not appear to receive unique `worker_index` and `vector_index` values, which we originally used to create environment diversity within a trial.
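For context, RLlib's `env_creator` receives an `EnvContext` carrying `worker_index` and `vector_index`, and the usual pattern is to fold both into a per-copy seed so each parallel environment is distinct. The helper below is a hypothetical sketch of that seeding scheme (the function name and `max_vector` bound are my assumptions, not SMARTS code); it only illustrates why non-unique indices break diversity.

```python
def env_seed(base_seed: int, worker_index: int, vector_index: int,
             max_vector: int = 128) -> int:
    """Combine RLlib's worker_index and vector_index into one seed.

    If every parallel env copy has a unique (worker_index, vector_index)
    pair, the derived seeds never collide; if RLlib hands every copy the
    same indices (the bug described above), all copies get the same seed
    and the trial loses its environment diversity.
    """
    return base_seed + worker_index * max_vector + vector_index


# With unique index pairs, 4 workers x 8 vectorized copies -> 32 distinct seeds.
seeds = {env_seed(42, w, v) for w in range(4) for v in range(8)}
assert len(seeds) == 4 * 8
```

In an RLlib `env_creator`, the indices would come from the `EnvContext` argument (e.g. `ctx.worker_index`, `ctx.vector_index`) rather than being passed explicitly as above.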