-
When I installed pytorch-rl on Python 3.6, I only got ddpg and ddqn.
-
![dqn](https://user-images.githubusercontent.com/23042512/45259824-d8441780-b38a-11e8-94f9-6391923aa2f7.png)
![ddqn](https://user-images.githubusercontent.com/23042512/45259828-e5610680-b38a-11e8-988…
-
Hello, sorry to bother you. I admire your paper and respect your contributions to the field of autonomous driving. I have a question: what should I do if I want to get results with Dueling DDQN? I would…
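For context, the dueling architecture replaces the final Q layer with separate value and advantage streams; below is a minimal PyTorch sketch of such a head (the layer sizes and names are illustrative assumptions, not the authors' code):

```python
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    """Dueling Q-head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""

    def __init__(self, feature_dim: int, num_actions: int):
        super().__init__()
        self.value = nn.Sequential(nn.Linear(feature_dim, 128), nn.ReLU(), nn.Linear(128, 1))
        self.advantage = nn.Sequential(nn.Linear(feature_dim, 128), nn.ReLU(), nn.Linear(128, num_actions))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        v = self.value(features)          # (batch, 1)
        a = self.advantage(features)      # (batch, num_actions)
        # Subtracting the mean advantage keeps V and A identifiable.
        return v + a - a.mean(dim=1, keepdim=True)
```

Combined with a Double DQN target, a head like this is what is usually meant by Dueling DDQN.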
-
Dear Sir,
Does your FQE evaluation method support .d3 model files?
When I try to run my trained DDQN .d3 model with FQE, an error occurs saying "RuntimeError: Invalid magic number; corrupt file…
-
Below is the DDQN implementation in this repo. Note that when the target NN computes the next Q value for the next state, the action it uses is not obtained from `self.model`; it simply takes the action with the maximum value under the target NN at the next state. This implementation is basic target network + DQN, not a true DDQN (a correct target computation is sketched after the snippet).
```python
class DoubleDQN:
…
```
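For contrast, here is a minimal sketch of a true Double DQN target computation (assumed PyTorch; `online_net`, `target_net`, and the batch tensors are illustrative names, not the repo's actual code): the online network selects the greedy action, and the target network evaluates it.

```python
import torch

def double_dqn_target(online_net, target_net, rewards, next_states, dones, gamma=0.99):
    """Double DQN target: the online net selects the action, the target net evaluates it."""
    with torch.no_grad():
        # Action selection with the online network (the step missing in the snippet above).
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        # Action evaluation with the target network.
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        # One-step bootstrap; terminal transitions contribute no bootstrap value.
        return rewards + gamma * (1.0 - dones) * next_q
```

Using the target network for both selection and evaluation, as in the repo's snippet, reduces the update to plain DQN with a target network and keeps the maximization bias that Double DQN is meant to remove.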
-
Hello.
I am running your code on the Atari game Breakout-v0.
The settings are simple DQN (NIPS), DQN (Nature), DDQN, dueling DQN, and dueling DDQN.
Now each process has been running for almost 6M (6,000,000)…
-
New [paper](http://arxiv.org/pdf/1606.01868v1.pdf) with a method that performs well on Montezuma's Revenge. The implementation could be used with both DDQN ER and async A3C. The probability used for the pse…
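For reference, a minimal sketch of the pseudo-count exploration bonus from that paper, assuming a density model that can report a state's probability before and after being trained on it (the function and parameter names here are placeholders):

```python
import math

def pseudo_count_bonus(rho, rho_prime, beta=0.05):
    """Count-based bonus from pseudo-counts (arXiv:1606.01868).

    rho:       density model probability of the state before observing it
    rho_prime: recoding probability, i.e. probability after one update on that state
    beta:      bonus scale (the value here is an illustrative assumption)
    """
    # Pseudo-count implied by the density model's prediction gain.
    pseudo_count = rho * (1.0 - rho_prime) / max(rho_prime - rho, 1e-12)
    # Bonus added to the environment reward for the visited state.
    return beta / math.sqrt(pseudo_count + 0.01)
```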
-
I am thinking of using sheeprl as the base for my RL experiments! My work usually builds on DQN-type algorithms: in increasing order of complexity, DDQN, Rainbow, or R2D2. Having some of th…
-
I am trying to adapt your code to train the agent to play Breakout. I tried both the CartPole-basic file and the Seaquest-DDQN-PER file, but the agent doesn't seem to learn after training…
-
Thank you so much for this great project.
When I try to run ddqn_rl_trader.py on Windows (my computer has no GPU, so I use LSTM instead of CuDNNLSTM), I get the following errors:
2019-01-17 17:0…
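For reference, a minimal sketch of the CPU/GPU layer swap mentioned above, assuming Keras 2.x with a TensorFlow 1.x backend (the layer size is an illustrative assumption):

```python
import tensorflow as tf
from keras.layers import LSTM, CuDNNLSTM

# CuDNNLSTM requires a GPU; fall back to the plain LSTM layer on CPU-only machines.
RNN = CuDNNLSTM if tf.test.is_gpu_available() else LSTM

# Example use inside a model definition:
# x = RNN(64, return_sequences=True)(inputs)
```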