-
I made a Docker image (https://github.com/arvganesh/stable-retro-docker) that let me run stable-retro + gymnasium on my M1 Mac with display support.
-
Use the insights from https://github.com/ll7/understanding_deep_RL/blob/66a5e6943e4fd7466ad3d7638ae951c10cb8dcc2/wandb_tests/wandb_car_racing_sweep.py#L151-L181 to simplify https://github.com/ll7/robot_…
-
### Question
I want to add a graph to the observation space by
```python
import networkx as nx

# Create NetworkX graph
G = nx.Graph()
# Add nodes (C-alpha atoms) with 320-dimensional zero embeddings
for _, ro…
```
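A minimal, self-contained sketch of the idea described above, assuming each node carries a 320-dimensional zero vector as its feature. The DataFrame iteration in the original snippet is truncated, so the integer node IDs and the chain edges here are hypothetical stand-ins:

```python
import networkx as nx
import numpy as np

# Build a graph whose nodes each carry a fixed-size embedding vector.
G = nx.Graph()
num_residues = 5  # hypothetical count; one node per C-alpha atom
for i in range(num_residues):
    # 320-dimensional zero embedding, as in the original snippet
    G.add_node(i, embedding=np.zeros(320, dtype=np.float32))

# Connect consecutive residues along the backbone (illustrative only).
for i in range(num_residues - 1):
    G.add_edge(i, i + 1)

# Stack node features into one array usable inside an observation space.
features = np.stack([G.nodes[n]["embedding"] for n in G.nodes])
print(features.shape)  # (5, 320)
```

Stacking the per-node attributes into a fixed-shape array is one way to expose graph data through a `Box`-style observation space, which expects fixed-size numeric arrays.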
-
Langkaer Gymnasium/HF/IB World School
-
### Describe the bug
When I try to create a flake for developing with RLlib, I get the following error:
`ImportError: /nix/store/ri3jr13byjp43lcf5nrqsj9cn0mpmnyi-python3-3.12.5-env/lib/python3.12/s…
-
### What happened + What you expected to happen
I converted existing code that worked on 2.7 to 2.20 (the new API).
The error:
File "/opt/project/trading/training/model/rl/multi_agent/ppo/equity/trainer…
-
### Description:
When I run the agent_train.py script, an error occurs that halts execution. The issue appears to be related to checking the truth value of an array with more than one eleme…
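The "truth value of an array" error typically comes from using a multi-element NumPy array in a boolean context. A minimal reproduction and the usual fix (an explicit `.any()` or `.all()` reduction), independent of the truncated script above:

```python
import numpy as np

a = np.array([1.0, 0.0, 2.0])

# Using a multi-element array directly as a condition raises ValueError,
# because NumPy cannot know whether you mean any() or all().
try:
    if a:
        pass
except ValueError as e:
    print(type(e).__name__)  # ValueError

# Be explicit about the intended reduction instead:
print(bool(a.any()))  # True: at least one element is nonzero
print(bool(a.all()))  # False: not every element is nonzero
```

In RL training code this often surfaces when a vectorized `done`/`terminated` array is used where a single boolean is expected; reducing it explicitly (or indexing the single environment) resolves the ambiguity.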
-
**Describe the bug**
Cannot recover gymnasium environment with `eval_env=True`.
**Code example**
```python
import minari

dataset = minari.load_dataset("mujoco/ant/expert-v0")
env = dataset.recover…
```
-
### Is your feature request related to a problem? Please describe
Configuring environments in **Jumanji** involves manually setting parameters such as grid size, number of agents, etc. This …
-
I propose we write a user guide for rlberry. Its outline would be something like this:
* Installation
* Basic Usage
* Quick Start RL
* Quick Start Deep RL
* Setting up an experiment
…