-
### What happened + What you expected to happen
Running PPO with an LSTM and a custom tokenizer (set via a catalog) results in the following error:
```
File "ppo_lstm_encoder_sample.py", line…
```
-
Hello,
Thank you for sharing the file. When I run
```
python inference.py
```
I get the following traceback:
```
Traceback (most recent call last):
  File "/home/osboxes/Desktop/Traffic-Signal-Control/inference.py", line 13, in <module>
    from ray.rlli…
```
-
## Bug description
I was testing the imitation learning library with a custom Gym environment and ran into a shortcoming in `imitation/util/util.py`. The error message is provided below.
`…
-
We want to support ppoaf as a training engine, in addition to rllib.
This should be done after #275 and #391.
-
When trying to run the library using
```
python train.py --experiment minigrid-a2c-all --env MiniGrid-FourRooms-v0
```
I am confronted with this error:
```
ImportError: cannot import name…
```
-
### Description
Currently, no information sharing among agents is available apart from sharing the same policy and/or having centralized information among them. As the number of algorithms in MARL explo…
-
CI test **linux://rllib:learning_tests_multi_agent_cartpole_appo_gpu** is flaky. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/5169#01905ba9-2c2c-4ff0-ba8e-c17a10a43739
- ht…
-
## System information
- Grid2op version: `1.8.1`
- l2rpn-baselines version: `0.6.0.post1`
- System: `mac osx, ubuntu16.04, ...`
- Baseline concerned: `PPO_RLLIB`
## Bug description
When I …
-
Hello. Thank you very much for sharing your models.
After training, the trained model is stored in `ray_results`.
1) Could you tell me how to load and use the trained model?
2) Each vehicle is ass…
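For question 1), a common approach (not confirmed by this thread) is to restore the algorithm from the checkpoint directory that Ray writes under `ray_results` and then query it for actions. The sketch below assumes Ray 2.x and RLlib's `Algorithm.from_checkpoint`; the checkpoint path and observation in the usage comment are hypothetical placeholders.

```python
def load_trained_policy(checkpoint_path):
    """Restore an RLlib Algorithm (and its trained policy) from a checkpoint.

    Assumes Ray 2.x; the import is deferred so this sketch stays importable
    even where Ray is not installed.
    """
    from ray.rllib.algorithms.algorithm import Algorithm
    return Algorithm.from_checkpoint(checkpoint_path)

# Hypothetical usage (paths and obs are placeholders, not from this thread):
# algo = load_trained_policy("~/ray_results/PPO_my_env/checkpoint_000100")
# action = algo.compute_single_action(obs)
```

From there, `compute_single_action` can be called once per agent/vehicle inside your environment loop.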
-
Hello pvbrowser community,
pvbrowser compiles and works perfectly on my system, but I have not succeeded in making the rllib standalone work with Python.
I've tried the build_python_interface script with …