-
I have a vectorized environment with multiple instances of CryptoEnv. I want to use the RandomUniformScaleReward wrapper to scale the rewards by a random factor sampled from a uniform distribution. I …
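As far as I know `RandomUniformScaleReward` is not part of core Gym, so here is a minimal standalone sketch of the idea: a wrapper that, once per episode, draws a scale factor from a uniform distribution and multiplies every reward by it. `DummyEnv` is a hypothetical stand-in for `CryptoEnv` so the snippet runs without Gym installed.

```python
import random


class DummyEnv:
    """Hypothetical stand-in for CryptoEnv so the sketch runs without Gym."""

    def reset(self):
        return 0.0

    def step(self, action):
        # observation, reward, done, info
        return 0.0, 1.0, True, {}


class RandomUniformScaleReward:
    """Scale every reward by a factor drawn once per episode from U(low, high)."""

    def __init__(self, env, low=0.5, high=1.5, seed=None):
        self.env = env
        self.low, self.high = low, high
        self.rng = random.Random(seed)
        self.scale = 1.0

    def reset(self):
        # Resample the scale at the start of each episode.
        self.scale = self.rng.uniform(self.low, self.high)
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return obs, reward * self.scale, done, info


env = RandomUniformScaleReward(DummyEnv(), low=0.5, high=1.5, seed=0)
env.reset()
_, reward, _, _ = env.step(None)
print(0.5 <= reward <= 1.5)
```

For a vectorized setup, the same wrapper would be applied to each sub-environment before they are stacked, so each instance keeps its own independently sampled scale.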
-
[OpenAI Gym](https://gym.openai.com/) provides several environments to demonstrate the capabilities of RL on different problems. Deepbots' goal is to demonstrate the capabilities of RL in a 3D, high fideli…
-
### Question
I am currently trying to train a Stable Baselines 3 agent on the Mountain Car Continuous environment. I want to increase `max_episode_steps` (which is set to 999 by…
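In Gym the episode length is enforced by the `gym.wrappers.TimeLimit` wrapper, so the usual approach is to re-wrap the unwrapped env with a larger `max_episode_steps`. Below is a minimal standalone sketch of what that wrapper does; `NeverEndingEnv` is a hypothetical dummy so the snippet runs without Gym installed.

```python
class NeverEndingEnv:
    """Hypothetical dummy env that never terminates on its own."""

    def reset(self):
        return 0

    def step(self, action):
        return 0, 0.0, False, {}


class TimeLimit:
    """Truncate episodes after max_episode_steps, like gym.wrappers.TimeLimit."""

    def __init__(self, env, max_episode_steps):
        self.env = env
        self.max_episode_steps = max_episode_steps
        self._elapsed = 0

    def reset(self):
        self._elapsed = 0
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self._elapsed += 1
        if self._elapsed >= self.max_episode_steps:
            # Flag the truncation so the agent can distinguish it
            # from a genuine terminal state.
            done = True
            info["TimeLimit.truncated"] = True
        return obs, reward, done, info


env = TimeLimit(NeverEndingEnv(), max_episode_steps=2000)
env.reset()
done, steps = False, 0
while not done:
    _, _, done, info = env.step(None)
    steps += 1
print(steps)  # 2000
```

With real Gym, the equivalent would be something like `TimeLimit(gym.make("MountainCarContinuous-v0").unwrapped, max_episode_steps=2000)` before handing the env to SB3.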
-
I've installed the required packages, but it still won't load CarRacing-v0:
```
pip install gym
pip install box2d
pip install box2d-kengz
pip install pyglet
```
```python
from gym import envs
print(envs.regis…
```
-
When I run `env.render()` the rendering fails, a black pop-up window appears, and a warning is reported:
Your graphics drivers do not support OpenGL 2.0.
You may experience rendering issues or …
-
### What happened + What you expected to happen
I am trying to solve MountainCar-v0 with ray tune.
I get the following error:
```
ERROR serialization.py:371 -- _generator_ctor() takes from 0 t…
-
## Bug description
Attempting to load SB3 models from Huggingface in `serialize.py` often raises a `FileExistsError` telling us "Outdated policy format: we do not support restoring normalization …
-
### 🐛 Bug
The documentation of the DQN agent (https://stable-baselines3.readthedocs.io/en/master/modules/dqn.html) specifies that the `log_interval` parameter is "The number of timesteps before logging". How…
-
### 🐛 Bug
`n_warmup_steps` is currently computed by dividing `n_evaluations` by 3, which does not really make sense.
```python
elif pruner_method == "median":
pruner = MedianPru…
-