-
### 🐛 Bug
SBX becomes much slower than SB3 when the number of CPUs is limited
### To Reproduce
Steps to reproduce the behavior.
```python
'''
For installation please do -
pip install g…
-
-
Hi, this looks like a stunning library.
I am trying to run a project on Windows, but I am getting a compilation error:
Command:
cl /LD /LD /std:c++17 /O2 /arch:AVX2 /fp:fast -DTINYRL_USE_PYTHON_ENVI…
-
**Issue Description**
I am encountering an error while running the code provided in the official CityLearn documentation. I have not modified a single line of the code, and I'm using the exact code s…
-
### 🐛 Bug
Hi,
When I try to run TQC hyperparameter optimization with multiple jobs (n-jobs > 1) on a GPU (this also happens with multiple CPU cores and n-jobs = 1), I get this error:
```
…
-
## Environment
- Grid2op version: `1.9.5`
- lightsim version: `0.7.5`
- gym: `0.21.0`
- gymnasium: `0.28.1`
- stable-baselines3: `2.0.0`
- System: `ubuntu20.04`
- Grid2Op environment: `…
-
Hello, I am wondering where the code is that sheeprl uses to save videos (both training and testing videos). I noticed that, after training with sheeprl (`exp=DreamerV3`), I have:
```
...
├── test_videos
│ …
-
### 🐛 Bug
I am designing my own PPO based on the PPO from SB3.
Regarding the GAE part: in my opinion, if the task ends in failure after action $a_t$, such as a crash, the TD target should be $r_…
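For context, the distinction the question hinges on is how GAE treats a true terminal step: on a real failure/termination the bootstrap term $\gamma V(s_{t+1})$ is dropped, so the TD target reduces to $r_t$ alone, while on a mere time-limit truncation one would still bootstrap. A minimal illustrative sketch (this is not SB3's actual internals; function and variable names here are assumptions for illustration):

```python
import numpy as np

def compute_gae(rewards, values, dones, last_value, gamma=0.99, lam=0.95):
    """Illustrative GAE computation.

    On a true terminal step (dones[t] is True) the bootstrap term
    gamma * V(s_{t+1}) is zeroed out, so the TD target is just r_t.
    """
    n = len(rewards)
    advantages = np.zeros(n)
    gae = 0.0
    for t in reversed(range(n)):
        next_value = last_value if t == n - 1 else values[t + 1]
        next_non_terminal = 0.0 if dones[t] else 1.0
        # TD error: the bootstrap term vanishes on terminal steps.
        delta = rewards[t] + gamma * next_value * next_non_terminal - values[t]
        # Accumulate the exponentially-weighted advantage, also cut at terminals.
        gae = delta + gamma * lam * next_non_terminal * gae
        advantages[t] = gae
    return advantages
```

Note that with this convention, the value used for bootstrapping after a terminal transition is irrelevant, which is exactly the behavior the question argues for on a crash.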
-
### ❓ Question
Hello,
I have implemented a custom vectorized environment using MuJoCo (which adheres to Stable Baselines3's VecEnv standard), but I haven't found any evidence of RL Zoo 3 supporting …
-
### 🐛 Bug
In the method `stable_baselines3.common.on_policy_algorithm.OnPolicyAlgorithm.learn`, the `iteration` value is not updated in the `locals` dictionary when using callbacks.
### To Reprod…
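The underlying pattern can be illustrated without SB3 itself: `learn` hands the callback a snapshot of `locals()`, and unless that snapshot is refreshed after `iteration` changes (SB3's `BaseCallback` exposes `update_locals()` for this), the callback keeps reading a stale value. A self-contained sketch of the pitfall, with names chosen for illustration rather than taken from SB3's source:

```python
def learn(num_iterations=3):
    """Mimics a training loop where a callback reads variables through a
    snapshot of locals(); the snapshot goes stale unless it is refreshed."""
    iteration = 0
    callback_locals = {}
    callback_locals.update(locals())  # initial snapshot: iteration == 0
    seen = []
    while iteration < num_iterations:
        iteration += 1
        # The "callback" fires here: it sees the snapshot, not the live variable,
        # so it lags one update behind (or, if never refreshed, stays at 0).
        seen.append(callback_locals["iteration"])
        # The fix: refresh the snapshot after `iteration` changes
        # (the analogue of SB3's callback.update_locals(locals())).
        callback_locals.update(locals())
    return seen
```

With the refresh in place the callback still observes the value from before the current increment, which matches the reported symptom of `iteration` lagging behind in `self.locals`.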