-
Training is slow and appears to be stuck pending, possibly because the GPU is not enabled. Could this instead be caused by excessive memory requirements?
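Before tuning memory, it can help to confirm that a GPU is even visible to the process. The sketch below is a generic, stdlib-only check (not RLlib-specific, and not from the original report) — note that RLlib additionally needs the algorithm config to request GPU resources (a `num_gpus`-style setting), or it will run on CPU regardless:

```python
import os
import shutil

def gpu_visibility_report():
    """Quick, framework-agnostic signals about GPU visibility.

    A debugging sketch only: even with a visible GPU, RLlib runs on
    CPU unless the algorithm config also requests GPU resources.
    """
    return {
        # NVIDIA driver tooling on PATH is a prerequisite for CUDA training.
        "nvidia_smi_on_path": shutil.which("nvidia-smi") is not None,
        # CUDA_VISIBLE_DEVICES="" hides every GPU even when one exists.
        "cuda_visible_devices": os.environ.get("CUDA_VISIBLE_DEVICES"),
    }

print(gpu_visibility_report())
```

If `nvidia-smi` is missing or `CUDA_VISIBLE_DEVICES` is set to an empty string, no framework will see a GPU, which would explain the slow, CPU-bound training.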
Here is the code.
```python
config = (
    PPOConfig().env…
```
-
If you are submitting a bug report, please fill in the following details and use the tag [bug].
**Describe the bug**
The `minigrid.wrappers.FlatObsWrapper` class, in its `observation` method, concat…
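For context, the kind of flattening this wrapper performs — concatenating the image observation with an encoding of the mission string — can be sketched in plain Python. The dict keys `image` and `mission` follow MiniGrid's observation format, but the character encoding below is a simplified assumption for illustration, not the wrapper's exact scheme:

```python
def flatten_obs(obs, mission_max_len=32):
    """Toy flattening of a MiniGrid-style dict observation.

    Simplified: the real FlatObsWrapper one-hot encodes mission
    characters; here we use raw character codes for brevity.
    """
    # Flatten the (H, W, C) image, given as nested lists, into one vector.
    image_flat = [float(v) for row in obs["image"] for cell in row for v in cell]
    # Pad/truncate the mission string to a fixed length, then encode it.
    mission = obs["mission"][:mission_max_len].ljust(mission_max_len)
    mission_enc = [float(ord(c)) for c in mission]
    # Concatenate both parts into a single flat observation vector.
    return image_flat + mission_enc

obs = {"image": [[[1, 0, 0], [2, 0, 0]]], "mission": "get the key"}
print(len(flatten_obs(obs)))  # 2 cells * 3 channels + 32 mission slots = 38
```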
-
```
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [16 lines of output]
/root/…
```
-
### Description
I am trying to figure out the correct dependency versions for Ray RLlib. I tried reading through the Ray documentation and the Ray source code, but I couldn't find the relevant in…
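One way to at least pin down what is currently installed is `importlib.metadata` from the standard library; a small sketch (the package names passed in are illustrative guesses, not an authoritative list of RLlib's dependencies):

```python
from importlib import metadata

def installed_versions(packages):
    """Report the installed version of each distribution, or None if absent."""
    versions = {}
    for name in packages:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            # Distribution is not installed in this environment.
            versions[name] = None
    return versions

# Example package names; adjust to the dependencies you actually care about.
print(installed_versions(["ray", "gymnasium", "torch"]))
```

Comparing this report against the pins in Ray's own requirements files for your Ray version is usually the quickest way to spot a mismatch.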
-
### What happened + What you expected to happen
The new API stack for RLlib seems to have challenges with observation wrappers, which are quite handy for action masking models. Unlike #44452, it is n…
-
### What happened + What you expected to happen
The example script **self_play_league_based_with_open_spiel.py** found [**here**](https://github.com/ray-project/ray/blob/master/rllib/examples/multi_a…
-
Hello,
I would like to use the or-gym supply chain environments for my project and am currently learning how they work.
While following the "Using Ray and DFO to optimize a multi-echelon supply cha…
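As background on what these environments model, a single period under a base-stock ("order-up-to") inventory policy can be sketched in a few lines. This is a deliberately simplified assumption (one echelon, zero lead time, lost sales), not or-gym's actual dynamics:

```python
def base_stock_step(inventory, demand, order_up_to):
    """One toy period: satisfy demand, then reorder up to a target level.

    Simplifying assumptions: a single echelon, zero replenishment lead
    time, and lost sales when demand exceeds stock on hand.
    """
    sold = min(inventory, demand)            # can only sell what is on hand
    inventory -= sold
    order = max(0, order_up_to - inventory)  # replenish up to the target
    inventory += order                       # zero lead time: arrives now
    return inventory, sold

# Demand of 8 against 5 units on hand: sell 5, then restock to 10.
inv, sold = base_stock_step(inventory=5, demand=8, order_up_to=10)
print(inv, sold)
```

A multi-echelon environment chains several such stages, with each stage's orders becoming the demand seen by the stage upstream.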
-
I am trying to train this network with the command `python ppo_single_large_hiar.py train`.
**But it fails with the following error:**
Failure # 1 (occurred at 2022-12-29_17-59-27)
Traceback (mos…
-
Release test **rllib_learning_tests_pong_ppo_torch.aws** failed. See https://buildkite.com/ray-project/release/builds/16725#018fe1f2-a6ac-4002-b08b-6d5c34f87e40 for more details.
Managed by OSS Test …
-
## Bug Description
When using visualizer_rllib.py, an error is thrown by rllib.py in the function get_flow_params when net_module is being defined. The error occurs because netwo…