-
### Proposal
This is a fairly loose proposal for a feature that, in my opinion, could be very useful, though it doesn't have to be done anytime soon.
Currently, gym uses a rather messy stateful OOP approach, …
-
Hello, I would like to know if it's possible to create multiple instances of the gym-rle env, something like in this code:
```python
import gym
import gym_rle
envs = [gym.make('MortalKombat-v0')…
```
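Creating several independent instances is indeed possible, since each `gym.make` call returns a fresh environment object. A minimal sketch, using the built-in `CartPole-v0` only so it runs without gym-rle installed:

```python
import gym

# Each gym.make() call builds a fresh, independent environment object, so a
# list comprehension yields separate instances with their own state.
# 'CartPole-v0' stands in for 'MortalKombat-v0', which needs gym-rle.
envs = [gym.make('CartPole-v0') for _ in range(4)]

# Every instance can be reset and stepped independently of the others.
observations = [env.reset() for env in envs]
print(len(envs))  # → 4
```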
abcp4 updated 7 years ago
-
I'm running the Data Collection scenario, and I've set WANDB syncing to offline.
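As context for the offline setting, a minimal sketch of what `WANDB_MODE=offline` does (the run-directory name below is illustrative): wandb keeps run data on local disk instead of uploading it, and runs can be pushed later with the `wandb sync` CLI.

```python
import os

# Setting WANDB_MODE=offline before wandb.init() makes wandb write run data
# to local disk (under ./wandb/ by default) instead of uploading it.
os.environ["WANDB_MODE"] = "offline"
print(os.environ["WANDB_MODE"])  # → offline

# Recorded runs can be uploaded later from the shell, e.g.:
#   wandb sync wandb/offline-run-<timestamp>-<id>   # path is illustrative
```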
The commands I ran to start data collection:
```bash
export WANDB_MODE=offline
bash run/data_collect.sh /home…
```
-
On executing `trpo_continous.py`, I get the following error:
> [2017-07-01 23:52:58,375] Making new env: CartPole-v0
> [TL] InputLayer continous_shared/continous_input_layer: (?, 3)
> [TL…
-
Candidates are:
gym.wrappers.RecordEpisodeStatistics
gym.wrappers.ClipAction
gym.wrappers.NormalizeObservation
gym.wrappers.TransformObservation
gym.wrappers.NormalizeReward
gym.wrappers.Trans…
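As a sketch of how these candidates are typically applied (the env id and wrapper choice here are illustrative, and exact constructor signatures vary between gym versions):

```python
import gym

# Wrappers compose by nesting: each one wraps the env returned by the
# previous call and forwards reset()/step() through the chain.
env = gym.make('CartPole-v0')
env = gym.wrappers.RecordEpisodeStatistics(env)

obs = env.reset()
# After each finished episode, RecordEpisodeStatistics adds an 'episode'
# entry (return, length, elapsed time) to the step info dict.
print(type(env).__name__)  # → RecordEpisodeStatistics
```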
-
**Submitting author:** @MathisFederico (Mathis Federico)
**Repository:** https://github.com/IRLL/HierarchyCraft
**Branch with paper.md** (empty if default branch):
**Version:** v1.2.4
**Editor:** @lo…
-
Hello,
Right after installing mtenv, when importing it into Python (for instance, running `from mtenv import make`), the following error appears:
Traceback (most recent call last):
File "…
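Not part of the original report, but a quick stdlib-only check that can help narrow down an import error like this: it tells whether the package is visible to Python at all, and from where it would be loaded.

```python
import importlib.util

# find_spec returns None if 'mtenv' is not importable at all; otherwise it
# reports where the package would be loaded from, which helps distinguish a
# missing install from an error raised inside the package's __init__.
spec = importlib.util.find_spec("mtenv")
if spec is None:
    print("mtenv is not installed / not on sys.path")
else:
    print("mtenv found at:", spec.origin)
```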
-
Greetings,
I wanted to see a demo running and observe how the system behaves, so I tried running `python test.py` without any model inference (fixing `joystick = [0, 0, 1, 0, 0]`).
At first, it seemed w…
-
Is there a plan to use this library for reinforcement learning?
-
Has anybody had this issue:
Unable to create an Atari environment in the rllab3 environment (on macOS):
env = GymEnv('Pong-v0')
or
env = gym.make('Pong-v0')
Error :
Referenced from: /Users/james/ana…