-
I can run the code on PongNoFrameskip-v4 without problems:
`python main.py --env-name "PongNoFrameskip-v4" --algo ppo`
However, when I run the code on CartPole-v0:
`python main.py --env-name "Cart…
-
I've tried to use TRPO to create a model for `CartPole-v0` by following the instructions on your [OpenAI Gym page](https://gym.openai.com/evaluations/eval_4QXCRAATTDqakJV0YZlJ4g#reproducibility), chan…
-
While trying the a3c example provided, I encountered the following error:
```
Training model
Training ACAgentRunner...
[2017-04-10 16:30:50,699] Making new env: CartPole-v0
Training ACAgentRunne…
```
-
I code as a hobby and want to learn about neuroevolution; I came across your [tutorial](https://threads-iiith.quora.com/Neuro-Evolution-with-Flappy-Bird-Genetic-Evolution-on-Neural-Networks) in google…
-
**Describe the bug**
Running Gymnasium environments outside the classic-control family always throws an error, and the error is different for each environment. This could be because differ…
-
I am trying to run the OpenAI gym example on a different environment than Pendulum-v0. However, I keep getting this error:
```
raise AttributeError('Cannot get attribute %s from %s' % (item, self…
```
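Errors of this shape usually come from a wrapper whose `__getattr__` forwards attribute lookups to the wrapped environment and raises when the attribute is missing. A minimal sketch of the pattern — the class and attribute names here are illustrative, not the repo's actual code:

```python
class EnvWrapper:
    """Illustrative wrapper that forwards attribute lookups to a wrapped env."""

    def __init__(self, env_name, **attrs):
        self.env_name = env_name
        self._attrs = attrs  # stands in for the real environment's attributes

    def __getattr__(self, item):
        # Only called when normal lookup fails, mimicking the error above.
        try:
            return self._attrs[item]
        except KeyError:
            raise AttributeError(
                'Cannot get attribute %s from %s' % (item, self.env_name))


# Pendulum-style envs expose a `max_torque` field; CartPole-style envs do not,
# so code written against Pendulum-v0 fails on other environments.
pendulum = EnvWrapper('Pendulum-v0', max_torque=2.0)
print(pendulum.max_torque)  # 2.0

cartpole = EnvWrapper('CartPole-v0')
try:
    cartpole.max_torque
except AttributeError as e:
    print(e)  # Cannot get attribute max_torque from CartPole-v0
```

In other words, the fix is usually to stop reading Pendulum-specific attributes, not to change the wrapper.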
-
When I run the command (to create the dataset)
`for alpha in {0.0,1.0}; do python3 scripts/create_dataset.py --save_dir=$HOME/tmp/ --load_dir=$HOME/tmp/CartPole-v0 --env_name=cartpole --num_trajectory=4…
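One likely culprit in the quoted command is the missing space between `--env_name=cartpole` and `--num_trajectory=4…`. A quick argparse sketch — the flags here are assumed to mirror the script's — shows how the run-together form gets parsed:

```python
import argparse

# Illustrative parser with just the two flags visible in the command.
parser = argparse.ArgumentParser()
parser.add_argument('--env_name')
parser.add_argument('--num_trajectory', type=int)

# Without the space, argparse treats everything after '=' as the env name:
bad = parser.parse_args(['--env_name=cartpole--num_trajectory=4'])
print(bad.env_name)  # cartpole--num_trajectory=4

# With the space restored, both flags parse as intended:
good = parser.parse_args(['--env_name=cartpole', '--num_trajectory=4'])
print(good.env_name, good.num_trajectory)  # cartpole 4
```

So a mangled `env_name` like this would make the script look up a nonexistent environment, which matches the kind of failure reported.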
-
Running `python main.py --scenario classic_CartPole-v0 --algo dqn` throws all kinds of errors.
-
# Code
```python
seki@seki-VirtualBox:~/src/learn2deeplearn/learnRL/CartPole$ cat cp_random.py
#!/usr/bin/python2.7
import gym
env = gym.make('CartPole-v0')
env.reset()
for _ in range(1000):
…
```
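For reference, a loop like the truncated one typically renders, samples a random action, and steps the environment. Below is a self-contained sketch of that loop against a toy stand-in for the classic Gym API — a real run would use `gym.make('CartPole-v0')` instead of the mock:

```python
import random

class MockCartPole:
    """Toy stand-in for CartPole-v0's classic Gym API (reset/step/action_space)."""

    class _Space:
        n = 2  # CartPole has two discrete actions: push left or push right

        def sample(self):
            return random.randrange(self.n)

    action_space = _Space()

    def __init__(self):
        self._steps = 0

    def reset(self):
        self._steps = 0
        return [0.0, 0.0, 0.0, 0.0]  # cart position/velocity, pole angle/velocity

    def step(self, action):
        self._steps += 1
        done = self._steps >= 200  # CartPole-v0 caps episodes at 200 steps
        return [0.0] * 4, 1.0, done, {}


env = MockCartPole()
env.reset()
for _ in range(1000):
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        env.reset()  # classic Gym requires reset() after an episode ends
```

Note the `reset()` after `done` — omitting it is a common cause of errors in scripts like the one quoted.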
-
Compared to the "CartPole-v0" environment, one of the most common gym environments for reinforcement learning, "PCSE-v0" leads to monotonically increasing memory usage that ends up killing the kernel. Both…
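A hedged way to pin down this kind of leak is to snapshot allocations across rollouts with Python's `tracemalloc`. The "environments" below are toy stand-ins (not PCSE-v0 or CartPole-v0): one leaks by accumulating history forever, the other keeps a bounded footprint:

```python
import tracemalloc

class LeakyEnv:
    """Toy env that keeps every observation forever (mimics the leak)."""
    def __init__(self):
        self.history = []

    def step(self):
        self.history.append([0.0] * 1000)  # never freed

class BoundedEnv:
    """Toy env that keeps only the latest observation."""
    def __init__(self):
        self.last = None

    def step(self):
        self.last = [0.0] * 1000  # previous list becomes garbage

def growth_after(env, steps=200):
    """Return bytes of traced memory retained after `steps` env steps."""
    tracemalloc.start()
    base, _ = tracemalloc.get_traced_memory()
    for _ in range(steps):
        env.step()
    current, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return current - base

leaky = growth_after(LeakyEnv())
bounded = growth_after(BoundedEnv())
print(leaky, bounded)  # the leaky env retains far more memory
```

Running the same measurement around episodes of the real environment would show whether its footprint grows per step (a leak in the env) or stays flat (the leak is elsewhere in the training loop).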