-
Hey, wonderful work, sir.
I have read your report as well. You mentioned DQN and Double DQN. If you don't mind, could I also look into those implementations? I just want to get a clear picture of the loo…
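For anyone reading this thread later, here is a minimal sketch of how the vanilla DQN and Double DQN targets usually differ, written in PyTorch with hypothetical names (`q_net`, `target_net`); it is an assumption about the standard formulation, not code taken from the report:
```python
import torch

def td_targets(q_net, target_net, next_obs, rewards, dones,
               gamma=0.99, double_dqn=False):
    """One-step bootstrapped targets for a batch of transitions (sketch)."""
    with torch.no_grad():
        next_q_target = target_net(next_obs)              # Q_target(s', .)
        if double_dqn:
            # Double DQN: the online net selects the next action,
            # the target net evaluates it.
            next_actions = q_net(next_obs).argmax(dim=1, keepdim=True)
            next_v = next_q_target.gather(1, next_actions).squeeze(1)
        else:
            # Vanilla DQN: the target net both selects and evaluates.
            next_v = next_q_target.max(dim=1).values
        return rewards + gamma * (1.0 - dones) * next_v
```
The only difference between the two variants is which network picks the next-state action; everything else in the training loop stays the same.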
-
```
Traceback (most recent call last):
  File "main.py", line 120, in <module>
    main()
  File "main.py", line 117, in main
    atari_learn(env, task.env_id, num_timesteps=task.max_timesteps, double_dqn=dou…
```
-
Hi,
I'm stuck on an exception that doesn't let my training run correctly.
```
org.nd4j.linalg.exception.ND4JIllegalStateException: X, Y and Z arguments should have the same length for Pai…
-
Hi godka,
Thanks for sharing your DQN-ABR project. I have some questions to ask you.
I'm a newbie in the RL and ABR fields. I tried to train and test the DQN-based ABR algorithm using this project. I test…
-
Dear Authors,
First of all, I am very thankful for your repository. I am confused about the correctness of the implementation in one part. In `soft_dqn.py`, the variable `valid_v_target_next` is get…
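For context while discussing this, here is a minimal sketch of how a soft (maximum-entropy) value target is commonly formed; the names (`soft_v_target_next`, `alpha`) are hypothetical, and this is only an assumption about the usual formulation, not a claim about how `valid_v_target_next` is actually computed in `soft_dqn.py`:
```python
import torch

def soft_v_target_next(target_q_next, alpha=1.0):
    """Soft state value: V(s') = alpha * logsumexp(Q_target(s', .) / alpha)."""
    return alpha * torch.logsumexp(target_q_next / alpha, dim=1)

def soft_td_targets(target_q_next, rewards, dones, gamma=0.99, alpha=1.0):
    """One-step soft Q-learning targets for a batch of transitions (sketch)."""
    v_next = soft_v_target_next(target_q_next, alpha)     # shape: (batch,)
    return rewards + gamma * (1.0 - dones) * v_next
```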
-
Hi, I'm trying to run DQN with asynchronous sampling using rlpyt's async sampler and runner classes. However, it looks like they don't work with CPU only, and require the presence of a GPU. Here's my …
-
Hello, I have not found where the “two-nets structure” is in your simulator, and I want to change the code to Double DQN but have no idea how to do it. Could you discuss that with me?
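In case it helps frame the discussion, this is a minimal sketch of what a “two-nets structure” usually looks like: an online network that is trained plus a periodically synchronized target network. All names here are hypothetical and none of it is taken from the simulator:
```python
import copy
import torch

q_net = torch.nn.Linear(4, 2)            # stand-in for the real Q-network
target_net = copy.deepcopy(q_net)        # the second net in the two-net setup
for p in target_net.parameters():
    p.requires_grad_(False)              # the target net is never trained directly

def maybe_sync_target(step, period=1000):
    """Hard-copy the online weights into the target net every `period` steps."""
    if step % period == 0:
        target_net.load_state_dict(q_net.state_dict())
```
With such a structure in place, switching to Double DQN typically only changes how the next-state action is chosen when forming the target: the online net picks the action and the target net evaluates it.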
-
Hi @DanielTakeshi,
I am facing the same issue: the vanilla DQN and the PDD DQN agents are not learning as expected on BreakoutNoFrameskip-v4.
I copied over the hyperparameters and the expl…
-
When I run:
`python main.py --is_train=False --display=True --use_gpu=False`
I get:
```
[*] GPU : 1.0000
[2018-05-23 17:17:55,692] Making new env: Breakout-v0
{'_save_step': 500000,
'_test_ste…
```