-
**Description**
From the discussion in #756, callbacks in DQN should have access to local variables (e.g. `done`) through `BaseCallback.locals` / `self.locals`. However, this isn't the case.
**C…
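The behaviour the issue expects can be sketched without the real library. In this minimal stand-in, `ToyCallback` and `toy_train` are hypothetical names (not the actual stable-baselines classes); only the idea of the training loop exposing its local variables through `self.locals` is taken from the excerpt above:

```python
# Minimal sketch of the callback/locals pattern the issue expects.
# ToyCallback and toy_train are illustrative stand-ins, not the real
# stable-baselines API; only the self.locals idea comes from the issue.

class ToyCallback:
    def __init__(self):
        self.locals = {}

    def update_locals(self, locals_dict):
        # The training loop calls this so the callback can see e.g. `done`.
        self.locals.update(locals_dict)

    def on_step(self):
        # With locals wired through, `done` is visible from the callback.
        return self.locals.get("done")

def toy_train(callback, n_steps=3):
    for step in range(n_steps):
        done = (step == n_steps - 1)  # pretend the episode ends on the last step
        callback.update_locals(locals())  # expose loop variables to the callback
        callback.on_step()
    return callback.locals

cb = ToyCallback()
final_locals = toy_train(cb)
print(final_locals["done"], final_locals["step"])
```

The point of contention in the issue is that the DQN loop was not performing the `update_locals`-style step, so the callback's `self.locals` stayed empty.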
-
```python
# 10.4 Check the results of running the environment with a random-action agent
env = wrap_env(gym.make('MountainCar-v0'))
env.reset()
score = 0
step = 0
while True:
    action = env.action_space.sample()
    obs, reward, done, info = env.s…
```
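A complete version of this random-agent loop can be sketched without `gym` installed. Here `StubEnv` is a hypothetical stand-in that mimics only the Gym API surface the snippet uses (`action_space.sample`, `reset`, `step`); with `gym` available you would use `gym.make('MountainCar-v0')` instead:

```python
import random

class StubActionSpace:
    def sample(self):
        return random.randrange(3)  # MountainCar-v0 has 3 discrete actions

class StubEnv:
    # Hypothetical stand-in for gym.make('MountainCar-v0'), illustration only.
    def __init__(self, max_steps=200):
        self.action_space = StubActionSpace()
        self._max_steps = max_steps
        self._t = 0

    def reset(self):
        self._t = 0
        return (-0.5, 0.0)  # (position, velocity)-shaped observation

    def step(self, action):
        self._t += 1
        done = self._t >= self._max_steps  # classic 200-step episode limit
        return (-0.5, 0.0), -1.0, done, {}

env = StubEnv()
env.reset()
score, step = 0, 0
while True:
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    score += reward
    step += 1
    if done:
        break
print(step, score)  # 200 steps, score -200.0 for this stub
```

The loop shape matches the old 4-tuple `step` API (`obs, reward, done, info`) used in the excerpt.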
-
I was working through the gym tutorial https://gym.openai.com/docs/, but I ran into an error after executing the env.render() function. The function executed once properly, but I am not able to execute o…
-
On Python 3.7.6 (using Thonny IDE) I get the following error when importing `pyglet.gl`, which is actually being imported by `arcade`.
OS: Ubuntu 19.04
``` python
>>> import pyglet.gl
Traceb…
```
-
Please merge the README in https://gist.github.com/nish21/760cbdafcbb2838f7707e1edea6a1709
into master, so we have a single source of reference.
Also please add the set of benchmarking environment…
-
```
[06-28 11:42:33 MainThread @logger.py:224] Argv: D:/git/parl/xx.py
WARNING: OMP_NUM_THREADS set to 2, not 1.
The computation speed will not be optimized if you use data parallel.
It will fai…
```
-
### What is the problem?
Ray version: 0.7.5
Gym version: 0.17.0
Python version: 3.6.10
TensorFlow version: 1.10.0
OS: macOS Mojave 10.14.6
I used MountainCarContinuous-v0 as a customized e…
-
Please make sure that this is a bug. As per our [GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and…
-
Hi, I was trying the following command from the README: `python train.py --algo ppo --env MountainCar-v0 -n 50000 -optimize --n-trials 1000 --n-jobs 2 \
--sampler random --pruner median`, but I got the fol…
-
# Short Version:
## Expected Behaviour
`env.render(mode='rgb_array', close=True)` returns a numpy array containing the raw pixel representation of the current state.
## Actual Behaviour
The ca…
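The expected return value described above can be illustrated with plain NumPy: a successful `rgb_array` render is an `(H, W, 3)` `uint8` array of raw pixels. The 400x600 size below is an assumption for illustration (typical of classic-control viewers), not something read from gym itself:

```python
import numpy as np

# What a successful mode='rgb_array' render should look like: raw pixels
# as an (H, W, 3) uint8 array. The 400x600 size is an assumed example.
frame = np.zeros((400, 600, 3), dtype=np.uint8)

assert frame.ndim == 3 and frame.shape[2] == 3  # height x width x RGB
assert frame.dtype == np.uint8                  # raw 0-255 pixel values
```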