-
Got a problem with RLlib while training with a custom environment.
This uses a [simple env](https://github.com/DerwenAI/gym_projectile) where the [action space](https://github.com/DerwenAI/gym_proje…
-
Doesn't PPO, at least the vanilla variant, only work on-policy? That is, it learns from recently collected data rather than from an experience replay buffer?
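The question above can be illustrated with the clipped surrogate itself: the probability ratio only makes sense when the sample was drawn from the recent ("old") policy, which is why vanilla PPO is on-policy and discards data after each update. A minimal per-sample sketch in pure Python (the function name and signature are illustrative, not RLlib's API):

```python
import math

def ppo_clip_loss(logp_new, logp_old, advantage, eps=0.2):
    """PPO clipped surrogate loss for a single transition.

    The ratio exp(logp_new - logp_old) is an importance weight that is
    only valid if the transition was sampled from the *old* (recent)
    policy -- stale replay-buffer data would invalidate it.
    """
    ratio = math.exp(logp_new - logp_old)
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps)
    # Maximize the surrogate => minimize its negation.
    return -min(ratio * advantage, clipped * advantage)

# Ratio of 1 (same policy) with advantage 1 gives loss -1.
print(ppo_clip_loss(0.0, 0.0, 1.0))
# A ratio of 2 is clipped to 1 + eps = 1.2 for a positive advantage.
print(ppo_clip_loss(math.log(2.0), 0.0, 1.0))
```

Off-policy PPO variants exist, but they need explicit importance-sampling corrections beyond this clipping.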
-
Hi,
I was trying to reach you; I saw your results in the CARLA PPO [agent](https://github.com/bitsauce/Carla-ppo/tree/sub-policy) repository.
I need your help fixing some errors in the code …
-
### What happened + What you expected to happen
I am trying to run the basic PPO example on a remote Ray cluster. The cluster is running the nightly build from April 26th, since there were…
-
### What happened + What you expected to happen
After training multi-agent PPO with the new API stack under the guidance of [how-to-use-the-new-api-stack](https://docs.ray.io/en/latest/rllib/rllib-n…
-
### System Info
..
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own …
-
Hi.
I used the following command in order to get the related results in the paper:
Run Experiments.
`python launch.py -alg ppo -curiosity_alg rnd -env jamesbond -lstm -sample_mode gpu -num_gpus …
-
Hi there, I trained the model with TRL PPO following https://github.com/huggingface/trl/blob/main/examples/scripts/ppo.py
with the accelerate config:
deepspeed_zero3.yaml:
```
compute_environmen…
-
Traceback (most recent call last):
File "D:\PycharmProjects\HighwayEnv-master\scripts\sb3_highway_ppo_transformer.py", line 406, in
env.viewer.set_agent_display(
AttributeError: 'NoneType' object …
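The traceback above means `env.viewer` is still `None` when `set_agent_display` is called; in environments with a lazily created viewer, rendering once first usually creates it. A minimal, hypothetical sketch of the guard pattern (the `DummyEnv` class is a stand-in, not highway-env's actual API):

```python
class DummyEnv:
    """Stand-in for an environment whose viewer is built lazily."""

    def __init__(self):
        self.viewer = None  # no window exists until the first render

    def render(self):
        # Real environments construct the display window here.
        if self.viewer is None:
            self.viewer = object()

env = DummyEnv()
# Guard: make sure the viewer exists before configuring it.
if env.viewer is None:
    env.render()
# Now it is safe to call viewer methods such as set_agent_display.
print(env.viewer is not None)
```

Whether this applies here depends on how `sb3_highway_ppo_transformer.py` sets up rendering; checking that `render_mode` is set and `env.render()` has run before line 406 is the first thing to verify.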
-
I'm running the run_vectorized.py script from the examples folder on a different test case locally. The script executes successfully for a number of iterations, but then throws a 'simulation failed' er…