-
**Describe the bug**
`envs.action_space.sample()` is not reproducible even after seeding.
**Code example**
```python
import gym
import numpy as np

def make_env(env_id, seed):
    def thunk():
        …
```
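In many Gym versions, each `Space` keeps its own RNG that is independent of the environment's seed, so `action_space.sample()` stays nondeterministic unless the space itself is seeded. A minimal sketch of that behavior, assuming a `gym` version with the `spaces` module (the `Discrete(4)` space is just a stand-in):

```python
from gym import spaces

# Each Gym Space has its own np_random, separate from the env's RNG.
# Seeding the env alone does not make sample() reproducible; seed the space too.
space = spaces.Discrete(4)

space.seed(42)
first = [space.sample() for _ in range(5)]

space.seed(42)          # re-seeding restores the same sample stream
second = [space.sample() for _ in range(5)]

assert first == second  # identical seeds give identical samples
```

For vectorized setups, calling `envs.action_space.seed(seed)` (and seeding each sub-env's action space) is the usual workaround.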
-
When I try `pip3 install -U 'mujoco-py=1.50.1'`, the following error comes up:
Using legacy 'setup.py install' for mujoco-py, since package 'wheel' is not installed.
Installing collected pac…
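Two things stand out in the report: pip falls back to the legacy `'setup.py install'` path because `wheel` is not installed (installing `wheel` first avoids that warning), and a single `=` is not a valid pip version operator — pip expects `==` (the `=` here may just be a transcription slip in the excerpt). A small check of the specifier syntax, using the `packaging` library that ships alongside pip:

```python
from packaging.requirements import Requirement, InvalidRequirement

# '==' is the PEP 508 equality operator; a bare '=' is not valid.
Requirement("mujoco-py==1.50.1")  # parses fine

try:
    Requirement("mujoco-py=1.50.1")  # the operator as typed in the report
    raised = False
except InvalidRequirement:
    raised = True

assert raised, "a single '=' should be rejected by the requirement parser"
```

Note also that `mujoco-py` builds require the MuJoCo binaries to be installed separately, so fixing the specifier alone may not be enough.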
-
Dear authors, after checking the code, I found that in https://github.com/semitable/lb-foraging/blob/master/lbforaging/foraging/environment.py#L125
```python
field_x = self.field.sh…
```
-
## Issue summary
Hello, 😄
I want to **import the Atari Space Invaders ROM into retro.**
My ROM is named "SpaceInvaders-Atari2600.a26", the same name as presented in the list of possible ROMs playable i…
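For reference, gym-retro ships a command-line importer that scans a directory for known ROMs and copies matching ones into its data folders. A sketch of the usual workflow (the directory path is a placeholder; assumes `gym-retro` is installed):

```shell
# Point the importer at the folder containing SpaceInvaders-Atari2600.a26;
# it matches ROMs against retro's known-game database by checksum, not filename.
python3 -m retro.import /path/to/roms
```

If the ROM is listed as skipped, its checksum likely differs from the one retro expects for `SpaceInvaders-Atari2600`, even when the filename matches.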
-
### Problem
When running `rewex01_test01.yaml`, the output of the Energym simulation suggests that the simulation runs for an entire year before the agent takes its first action. Is this possible?…
-
In the [Swimmer environment](https://github.com/openai/gym/blob/master/gym/envs/mujoco/assets/swimmer.xml), there are 5 joints. However, the `step` function removes the positions of the first two join…
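For context, Swimmer's `qpos` has 5 entries (two root slide joints, one root rotation, two body rotors), and the observation drops the first two — the global x/y position — before concatenating with `qvel`. A hedged numpy sketch of that slicing (the values are stand-ins, not real simulator state):

```python
import numpy as np

# Stand-in joint state with Swimmer's dimensions: 5 positions, 5 velocities.
qpos = np.arange(5.0)   # [x, y, root angle, rotor 1, rotor 2] (illustrative)
qvel = np.arange(5.0)

# The observation excludes the root x/y position (qpos[:2]).
obs = np.concatenate([qpos[2:], qvel])
assert obs.shape == (8,)  # matches Swimmer's 8-dimensional observation
```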
-
Thanks for creating this easy-to-use environment for urban scenarios.
I wanted to use this environment for multi-agent learning. Currently, only single-agent learning is supported. Are there any plans for…
-
Right now, Gym has a GoalEnv class and an Env class as base classes in core.py. The GoalEnv class was added as part of the robotics environments and imposes special requirements on the observation space.…
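For reference, the GoalEnv convention constrains observations to a `Dict` space with three fixed keys, and pairs it with a `compute_reward(achieved_goal, desired_goal, info)` method. A minimal sketch of that contract (the 2-D boxes and the 0.05 threshold are arbitrary placeholders, not Gym defaults):

```python
import numpy as np
from gym import spaces

# A GoalEnv observation space must be a Dict with exactly these three keys.
goal_obs_space = spaces.Dict({
    "observation": spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32),
    "achieved_goal": spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32),
    "desired_goal": spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32),
})

def compute_reward(achieved_goal, desired_goal, info=None):
    """Sparse goal-reaching reward: 0.0 within the threshold, else -1.0."""
    return 0.0 if np.linalg.norm(achieved_goal - desired_goal) <= 0.05 else -1.0

sample = goal_obs_space.sample()
assert set(sample) == {"observation", "achieved_goal", "desired_goal"}
```

This key structure is what lets goal-relabeling methods such as HER recompute rewards from stored transitions.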
-
### Search before asking
- [X] I searched the [issues](https://github.com/ray-project/ray/issues) and found no similar issues.
### Ray Component
RLlib
### What happened + What you expected to hap…
-
I am using Ubuntu 16.04 with the latest version of Anaconda and all packages updated (e.g. NumPy 1.11 and CMake 3.3.1).
The following errors appear when running `pip install gym[all]`:
```
-- The C compiler i…
```
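Errors at the "The C compiler" stage of a CMake configure run usually mean missing system build dependencies rather than a Python problem. The gym README of that era suggested system packages along these lines for Ubuntu 16.04 (an assumption based on that documentation, not on the truncated log above):

```shell
# System packages the old gym README recommended before `pip install gym[all]`
apt-get install -y python-numpy python-dev cmake zlib1g-dev libjpeg-dev \
    xvfb libav-tools xorg-dev python-opengl libboost-all-dev libsdl2-dev swig
```

After installing these, re-running `pip install gym[all]` in a fresh shell typically gets past the CMake compiler checks.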