-
According to the Keras 2 API, the following need to change in the `Brain` class:
`output_dim` to `units`
`nb_epoch` to `epochs`
```python
def _createModel(self):
    model = Sequential()
    model.add(Dense(uni…
```
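The renames above are mechanical, so porting Keras 1 call sites can be sketched as a simple keyword-argument mapping (a hypothetical helper for illustration, not part of Keras itself):

```python
# Hypothetical porting helper illustrating the Keras 1 -> Keras 2 renames
# reported above; the mapping keys/values come from the Keras 2 API.
KERAS2_RENAMES = {
    'output_dim': 'units',   # Dense(output_dim=...) -> Dense(units=...)
    'nb_epoch': 'epochs',    # model.fit(nb_epoch=...) -> model.fit(epochs=...)
}

def port_kwargs(kwargs):
    """Return a copy of `kwargs` with Keras 1 names replaced by Keras 2 names."""
    return {KERAS2_RENAMES.get(key, key): value for key, value in kwargs.items()}
```

For example, `port_kwargs({'output_dim': 64, 'activation': 'relu'})` yields `{'units': 64, 'activation': 'relu'}`, which can be splatted into the Keras 2 `Dense(...)` call.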
-
Hi,
I finally got around to trying a real "deep learning" implementation against btgym, and I've run up against a problem. I really don't know enough about OpenAI Gym to understand what the problem is…
-
In your A3C implementation, you create [shared target networks](https://github.com/osh/kerlym/blob/master/kerlym/a3c/a3c.py#L91) to be synchronized with the learned network every few time steps (to s…
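The synchronization scheme described above can be sketched as a periodic hard copy of the learner's weights into the target network. This is an illustrative toy (the class names, the gradient stand-in, and the interval are assumptions, not kerlym's actual code or values):

```python
import random

SYNC_INTERVAL = 1000  # assumed interval between hard syncs

class TinyNet:
    """Toy stand-in for a network; holds a flat list of weights."""
    def __init__(self):
        self.weights = [random.random() for _ in range(4)]

def train(steps, learner, target):
    for step in range(1, steps + 1):
        # Stand-in for a gradient update on the learner network.
        learner.weights = [w - 0.01 * random.random() for w in learner.weights]
        if step % SYNC_INTERVAL == 0:
            # Hard sync: copy the learner's weights into the target network.
            target.weights = list(learner.weights)

learner, target = TinyNet(), TinyNet()
train(2000, learner, target)  # 2000 is a multiple of SYNC_INTERVAL, so the two nets end up equal
```

Keeping the target network frozen between syncs is what stabilizes the bootstrap targets; the interval trades off stability against how stale the targets become.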
-
1. How do I add a new game?
2. Where is the final result of the trained agent?
3. How do I switch between DQN, DDQN, and D-DDQN?
I am completely new to this, so your help would be highly appreciated. Thanks, looking forward to your reply.
-
Hi, spiglerg! When I run gym_ddpg.py, it raises an error: `ImportError: No module named batch_norm_utils`. It is obvious that batch_norm_utils.py is not included in the DQN_DDQN_Dueling_and_DDPG_Tensorfl…
-
Hi,
Although I can run the other scripts, I get the following error when I attempt to run Seaquest-DDQN-PER.py (using the Theano backend):
`Using Theano backend.
Using gpu device 0: GeForce GT 730M (CNMeM …
-
I run the environment through the Python interface from doc/example, like:
$ python python_example.py path_to_rom path_to_core
I modified the code to set the episode count to 2000, and the training had been running for 1 day bu…
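For reference, the episode loop being modified follows the usual ALE pattern: act until `game_over()`, accumulate reward, then `reset_game()`. Below is a self-contained sketch of that loop; `StubALE` is a stand-in written here so the example runs without a ROM, while the real script uses ALE's `ALEInterface` with the same method names:

```python
import random

class StubALE:
    """Stand-in with an ALE-like surface (act / game_over / reset_game)."""
    def __init__(self, frames_per_episode=50):
        self.frames = 0
        self.frames_per_episode = frames_per_episode

    def getLegalActionSet(self):
        return list(range(4))

    def act(self, action):
        self.frames += 1
        return random.choice([0, 1])  # toy per-step reward

    def game_over(self):
        return self.frames >= self.frames_per_episode

    def reset_game(self):
        self.frames = 0

ale = StubALE()
actions = ale.getLegalActionSet()
episodes = 2000  # the episode count set in the modified script
totals = []
for _ in range(episodes):
    total = 0
    while not ale.game_over():
        total += ale.act(random.choice(actions))
    totals.append(total)
    ale.reset_game()
```

Note that with a random policy this loop only measures baseline reward; long wall-clock time per episode usually comes from frame rendering or an unbounded episode length, not the loop itself.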
-
I don't seem to be able to reproduce the results mentioned in this repo's README.md with the simple DQN on doomSimple after training for 10k episodes.
Please mention the hyperparameters used to obtain the re…
-
Pong does not converge on default settings.
I tried DQN & DDQN, ran both for 55,000 episodes, and both fail to converge.
DQN: `kerlym -e Pong-v0 -n simple_cnn -t 1 -P -f 200 -u 250 -o 0.5 -D 0 -a dqn …`
-
Requires a "sum tree" binary heap for efficient execution.
**Edit 2016-06-02:** Please keep to the [contributing guidelines](https://github.com/Kaixhin/Atari/blob/master/CONTRIBUTING.md#using-the-iss…
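The "sum tree" mentioned above is the standard structure for prioritized sampling: leaves hold priorities, internal nodes hold subtree sums, so both updates and proportional sampling are O(log n). A minimal sketch (illustrative, not the repo's actual implementation):

```python
class SumTree:
    """Binary heap over priorities: leaves store priorities, internal
    nodes store the sum of their children. Illustrative sketch."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.tree = [0.0] * (2 * capacity)  # tree[1] is the root; leaves start at index `capacity`

    def set(self, index, priority):
        # Update a leaf and propagate the change up to the root: O(log n).
        i = index + self.capacity
        delta = priority - self.tree[i]
        while i >= 1:
            self.tree[i] += delta
            i //= 2

    def total(self):
        return self.tree[1]  # sum of all priorities

    def sample(self, value):
        # Find the leaf whose cumulative-priority interval contains `value`
        # (0 <= value <= total()): descend left if it fits in the left
        # subtree, otherwise subtract the left sum and descend right.
        i = 1
        while i < self.capacity:
            left = 2 * i
            if value <= self.tree[left]:
                i = left
            else:
                value -= self.tree[left]
                i = left + 1
        return i - self.capacity
```

Drawing `value` uniformly from `[0, total())` then calling `sample(value)` selects each transition with probability proportional to its priority, which is exactly the operation prioritized experience replay needs in its inner loop.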