-
Hi,
I'm currently working with the `PyFlyt/QuadX-Ball-In-Cup-v2` environment using `gymnasium` and have set the `render_mode` to 'rgb_array'. The environment currently returns a first-person view f…
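In case it helps frame the question: a generic way to change or post-process the frames an environment hands back is a thin render wrapper that forwards everything else untouched. The sketch below is purely illustrative — `StubEnv`, `CameraWrapper`, and `flip_vertical` are hypothetical names standing in for the real PyFlyt environment and transform, not part of `gymnasium` or PyFlyt.

```python
# Illustrative sketch: a wrapper that intercepts render() so the returned
# frame can be swapped or post-processed. StubEnv and CameraWrapper are
# hypothetical names used only for this example.

class StubEnv:
    """Stands in for a real env whose render() returns an RGB frame."""
    def render(self):
        # A tiny 2x2 "image": rows of (R, G, B) pixel tuples.
        return [[(255, 0, 0), (0, 255, 0)],
                [(0, 0, 255), (255, 255, 255)]]

class CameraWrapper:
    """Forwards to the wrapped env but rewrites rendered frames."""
    def __init__(self, env, transform):
        self.env = env
        self.transform = transform

    def render(self):
        return self.transform(self.env.render())

def flip_vertical(frame):
    # Placeholder transform; a real one might select a different camera view.
    return frame[::-1]

env = CameraWrapper(StubEnv(), flip_vertical)
print(env.render()[0][0])  # → (0, 0, 255): first pixel now from the bottom row
```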
-
- [x] I have marked all applicable categories:
- [ ] exception-raising bug
- [ ] RL algorithm bug
- [ ] documentation request (i.e. "X is missing from the documentation.")
- [ ] ne…
-
I get the following error when I run the FinRL paper trading demo. Can you please help?
```
TypeError Traceback (most recent call last)
[](https://localhost:8080/#) in ()…
```
-
Hey,
As you may know, I'm the maintainer of OpenAI gym. Your changes are interesting, and I was hoping you could elaborate on why you made them. It seems like some of these should be upstreamed to …
-
-
## To Reproduce
```python
from torchrl.envs import EnvCreator, ParallelEnv
from torchrl.envs.libs.gym import GymEnv

def run(from_pixels):
    env = ParallelEnv(
        2, EnvCreator(lambd…
-
### 🐛 Bug
I tested several implementations of the PPO algorithm and found discrepancies among them. I tested each implementation on 56 Atari environments, with five trials per…
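For context, one implementation detail that commonly diverges across PPO codebases is how generalized advantage estimation (GAE) is computed (e.g. how the recursion is truncated at episode boundaries). A minimal pure-Python sketch of the standard GAE recursion, not taken from any of the tested implementations:

```python
# Minimal sketch of generalized advantage estimation (GAE), a frequent
# source of divergence between PPO implementations.

def compute_gae(rewards, values, last_value, gamma=0.99, lam=0.95):
    """Backward recursion: A_t = delta_t + gamma * lam * A_{t+1}."""
    advantages = [0.0] * len(rewards)
    next_value = last_value      # bootstrap value after the last step
    next_advantage = 0.0
    for t in reversed(range(len(rewards))):
        # TD residual: delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
        delta = rewards[t] + gamma * next_value - values[t]
        next_advantage = delta + gamma * lam * next_advantage
        advantages[t] = next_advantage
        next_value = values[t]
    return advantages

# Tiny worked example with made-up numbers.
adv = compute_gae([1.0, 1.0], [0.5, 0.5], last_value=0.0)
```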
-
The ability to set a [random seed](https://github.com/jurgisp/memory-maze/blob/main/memory_maze/tasks.py#L66) was recently implemented in dm_env. It needs to be added to the gym wrapper too.
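For illustration, plumbing a seed through a wrapper usually just means forwarding it to the underlying task's RNG. A sketch with the stdlib `random` module and hypothetical names (`ToyTask`, `SeedableWrapper`) — not the actual memory-maze or dm_env API:

```python
import random

# Hypothetical sketch of "forward the seed through the wrapper";
# ToyTask and SeedableWrapper are illustrative names only.

class ToyTask:
    """Stands in for a dm_env task that accepts a seed."""
    def __init__(self, seed=None):
        self.rng = random.Random(seed)

    def reset(self):
        return self.rng.random()

class SeedableWrapper:
    """Forwards the seed to the wrapped task instead of dropping it."""
    def __init__(self, task_ctor, seed=None):
        self.task = task_ctor(seed=seed)

    def reset(self):
        return self.task.reset()

a = SeedableWrapper(ToyTask, seed=42).reset()
b = SeedableWrapper(ToyTask, seed=42).reset()
assert a == b  # same seed, same first observation
```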
-
### What happened + What you expected to happen
Training TD3/DDPG doesn't seem to respect the action bounds, in particular the lower bound: it seems like the action outputs are …
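For reference, the usual way bounded action spaces are handled is to squash the raw network output with tanh and rescale it into `[low, high]`. A minimal sketch of that standard transform (not any particular library's implementation):

```python
import math

# Standard tanh squash-and-rescale for bounded action spaces; a sketch,
# not the actual code of any RL library.

def scale_action(raw, low, high):
    """Map an unbounded network output into [low, high]."""
    squashed = math.tanh(raw)                      # now in (-1, 1)
    return low + 0.5 * (squashed + 1.0) * (high - low)

# Even extreme raw outputs stay strictly inside the bounds.
for raw in (-100.0, 0.0, 100.0):
    a = scale_action(raw, low=-2.0, high=0.5)
    assert -2.0 <= a <= 0.5
```

If actions violate the lower bound, this rescaling (or the clipping step that should follow exploration noise) is a natural place to look first.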
-
I get the following error while executing the code given in `experiments/ppo_4x4grid.py`. The console output is as follows:
```
2024-06-05 15:05:36,177 INFO worker.py:1743 -- Started a local Ray i…