-
Hi Edouard,
Thank you for your amazing contribution in the first place.
I am currently studying DQN (image input with a convolutional network) and want to apply it to highway-env.
I have…
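For context, image-input DQN setups typically preprocess observations before they reach the convolutional network: convert frames to grayscale, downsample, and stack the last few frames as channels so the network can infer motion. A minimal NumPy sketch of that pipeline, with the 84×84 size and 4-frame stack as assumptions borrowed from the Atari DQN convention (the `FrameStack` helper is hypothetical, not part of highway-env):

```python
from collections import deque
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Convert an RGB frame (H, W, 3) to grayscale in [0, 1] and downsample
    to 84x84. Nearest-neighbour index striding is used here only to avoid
    extra dependencies; a real pipeline would use a proper resize."""
    gray = frame.mean(axis=2).astype(np.float32) / 255.0
    h, w = gray.shape
    rows = np.linspace(0, h - 1, 84).astype(int)
    cols = np.linspace(0, w - 1, 84).astype(int)
    return gray[np.ix_(rows, cols)]

class FrameStack:
    """Keep the last k preprocessed frames as the network input (k, 84, 84)."""
    def __init__(self, k: int = 4):
        self.frames = deque(maxlen=k)

    def reset(self, frame: np.ndarray) -> np.ndarray:
        # On reset, fill the stack with copies of the first frame.
        for _ in range(self.frames.maxlen):
            self.frames.append(preprocess(frame))
        return np.stack(self.frames)

    def step(self, frame: np.ndarray) -> np.ndarray:
        self.frames.append(preprocess(frame))
        return np.stack(self.frames)

stack = FrameStack(k=4)
obs = stack.reset(np.zeros((210, 160, 3), dtype=np.uint8))
print(obs.shape)  # (4, 84, 84)
```

The `(4, 84, 84)` tensor is then what the first convolutional layer of the Q-network consumes.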
-
I want to implement a dueling double DQN algorithm for selecting multiple discrete actions. Since the existing dueling_ddqn_torch.py code only chooses a single action, I need to modify it. But when …
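One common way to extend a dueling network to multiple discrete action dimensions is a branched head: a shared state-value head plus one advantage head per action dimension, aggregated with the usual mean-subtracted dueling formula. A minimal NumPy sketch of just that aggregation step (the two-dimension example values are hypothetical, not from dueling_ddqn_torch.py):

```python
import numpy as np

def dueling_q(value: float, advantages: list) -> list:
    """Dueling aggregation Q(s, a) = V(s) + A(s, a) - mean_a A(s, a),
    applied independently per action dimension (one advantage head each)."""
    return [value + adv - adv.mean() for adv in advantages]

def select_actions(value: float, advantages: list) -> list:
    """Greedy action per dimension; with branched heads each dimension
    is argmaxed independently."""
    return [int(np.argmax(q)) for q in dueling_q(value, advantages)]

# Hypothetical state: one scalar value head and two action dimensions
# with 3 and 4 choices respectively.
v = 1.5
advs = [np.array([0.2, -0.1, 0.5]),
        np.array([0.0, 1.0, -0.5, 0.25])]
print(select_actions(v, advs))  # [2, 1]
```

Since `V(s)` and the per-dimension mean are constants within each head, they do not change the argmax; their role is to stabilize the learned Q-values, which matters for the TD target, not for greedy selection.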
-
The action space in my current scenario is discrete, so I need the two algorithms above to run it. It looks like PARL has not implemented these two yet? Could you schedule a release that adds them? Thanks!
-
DDPG, TD3, SAC, A2C, and PPO don't support the crypto environment.
DQN, DuelingDQN, DoubleDQN, and D3QN work in the crypto environment, but the rest don't, …
-
Hello, as in the title: in CartPole, using SAC and similar algorithms raises RuntimeError: mat1 dim 1 must match mat2 dim 0.
Is there a parameter (something like a discrete/continuous action flag) that controls whether the model outputs discrete or continuous actions, or do I need to modify the model myself?
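That RuntimeError usually means a linear layer received an input whose width doesn't match its weight matrix, which happens when a continuous-control critic (built for concat(state, action) input) is fed a discrete-action environment like CartPole. A NumPy sketch of the mismatch, with the layer shapes as illustrative assumptions rather than the library's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, action_dim = 4, 1   # CartPole: 4-dim state, Discrete(2) actions

# A continuous-control critic (SAC/DDPG style) expects concat(state, action):
W_cont = rng.normal(size=(state_dim + action_dim, 1))

# A discrete-action critic takes the state alone and outputs one Q per action:
n_actions = 2
W_disc = rng.normal(size=(state_dim, n_actions))

state = rng.normal(size=(1, state_dim))

# Feeding only the state into the continuous critic reproduces the shape
# mismatch behind "mat1 dim 1 must match mat2 dim 0":
try:
    state @ W_cont          # (1, 4) @ (5, 1) -> shape error
except ValueError as e:
    print("shape mismatch:", e)

q_all = state @ W_disc      # (1, 2): one Q-value per discrete action
print(q_all.shape)
```

So the fix is generally architectural, not a single flag: the critic and policy heads need a discrete variant (state-only input, one output per action), unless the library already ships one.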
-
There is a paper titled Towards Monocular Vision based Obstacle Avoidance through Deep Reinforcement Learning; could you please tell me whether this project is based on that paper?
-
As in the title: I am running D3QN reinforcement learning code with paddlepaddle, using a three-layer network.
1. How can I make better use of the GPU?
2. Why is CPU usage so high?
Environment: Ubuntu 18.04.5 LTS
My paddlepaddle and parl versions are:
`paddlepaddle-gpu 2.0.1.post101`
`parl 1.3.1`
Input p…
-
Please make sure that this is a bug. As per our [GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and…
-
**Your code is really good. It deserves more stars.**
I tried your code in other gym environments and it trains faster than other Dueling DQN implementations.
Besides, I'm confused by the name of `ddqn.p…
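The naming confusion around `ddqn` is common because "DDQN" can mean Dueling DQN or Double DQN. The substantive difference in Double DQN is only in the TD target: the online network selects the next action and the target network evaluates it. A small NumPy sketch of both targets (the example Q-values are made up to show the overestimation effect):

```python
import numpy as np

def dqn_target(reward, gamma, q_target_next, done):
    """Vanilla DQN: the target network both selects and evaluates
    the next action (max over the target net's Q-values)."""
    return reward + gamma * (1.0 - done) * q_target_next.max()

def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double DQN: the online network picks the action, the target
    network supplies its value, which damps overestimation."""
    a_star = int(np.argmax(q_online_next))
    return reward + gamma * (1.0 - done) * q_target_next[a_star]

# Hypothetical next-state Q-values where the online net overrates action 1:
q_online = np.array([1.0, 3.0])
q_target = np.array([2.0, 0.5])
print(dqn_target(1.0, 0.99, q_target, 0.0))                    # 1 + 0.99 * 2.0 = 2.98
print(double_dqn_target(1.0, 0.99, q_online, q_target, 0.0))   # 1 + 0.99 * 0.5 = 1.495
```

A dueling network, by contrast, changes the architecture (value + advantage heads), not the target, so the two can be combined freely (D3QN).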
-
### Bug report
Hello,
I tried to use argparse after importing matplotlib.pyplot, and this causes argparse to not recognize any arguments.
**Code for reproduction**
The python code i…
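A common workaround for import-order conflicts like this is to parse the command line before any heavy imports touch `sys.argv`, and to use `parse_known_args` so stray flags are tolerated rather than fatal. A stdlib-only sketch; the `--input`/`--title` flags and the deferred-import pattern are illustrative assumptions, not the reporter's actual script:

```python
import argparse

def parse_cli(argv=None):
    """Parse CLI arguments first, so nothing else has altered sys.argv yet."""
    parser = argparse.ArgumentParser(description="plotting demo")
    parser.add_argument("--input", required=True, help="data file to plot")
    parser.add_argument("--title", default="untitled", help="figure title")
    # parse_known_args ignores unrecognized extras instead of erroring out
    args, _unknown = parser.parse_known_args(argv)
    return args

if __name__ == "__main__":
    args = parse_cli()
    # Deferred import: bring in pyplot only after parsing is finished
    # (whether this avoids the reported conflict depends on the
    # matplotlib version in use).
    # import matplotlib.pyplot as plt
    print(args.input, args.title)
```

Whether the root cause is in matplotlib or in the surrounding script, isolating argument parsing into its own early step makes the failure mode easy to bisect.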