-
Hi!
Did you make these two the same on purpose, following Algorithm 1 from the original [arXiv 2013 paper](https://arxiv.org/abs/1312.5602)?
They initially stated that we should freeze the D…
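If the question is about freezing a target network (a mechanism formalized in the 2015 Nature follow-up to the 2013 paper), a minimal sketch of the periodic hard update might look like the following, with plain dicts standing in for network parameters and `C = 50` as an illustrative sync interval:

```python
import copy

def sync_target(online_params, target_params):
    # Hard update: overwrite the frozen target with the online parameters.
    target_params.clear()
    target_params.update(copy.deepcopy(online_params))

online = {"w": [0.1, 0.2]}
target = {"w": [0.0, 0.0]}

for step in range(1, 101):
    # Pretend gradient step on the online network only; the target
    # network stays frozen between syncs.
    online["w"] = [w + 0.01 for w in online["w"]]
    if step % 50 == 0:  # C = 50
        sync_target(online, target)
```

Between syncs the two networks intentionally hold different weights, which is what stabilizes the bootstrapped TD target.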
-
https://github.com/A-Noctua/DQN-Sandbox
A Deep Q-Learning application in robotics
http://rll.berkeley.edu/deeplearningrobotics/index.html
-
One central element of the Atari DQN is the use of 4 consecutive frames as input, making the state more Markovian, i.e. preserving the vital dynamic movement information. This paper http://arxiv.org/abs/1507.0…
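A minimal sketch of that frame-stacking idea (hypothetical `FrameStack` helper; 84×84 grayscale frames assumed, as in the Atari setup):

```python
from collections import deque
import numpy as np

class FrameStack:
    """Keep the last k observations and expose them as one stacked state,
    so a single state carries velocity/direction information."""

    def __init__(self, k=4):
        self.k = k
        self.frames = deque(maxlen=k)

    def reset(self, frame):
        # On episode start, fill the buffer with copies of the first frame.
        for _ in range(self.k):
            self.frames.append(frame)
        return self.state()

    def step(self, frame):
        self.frames.append(frame)
        return self.state()

    def state(self):
        # Shape (k, H, W): channel-first stack of the last k frames.
        return np.stack(list(self.frames), axis=0)
```

The deque with `maxlen=k` silently drops the oldest frame on each `step`, so the state is always exactly the last `k` observations.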
-
While training SAC, the following error occurred:
```
Traceback (most recent call last):
  File "/home/astik/double_pendulum/examples/reinforcement_learning/SAC/train_sac_noisy_env.py", line 357, in …
```
-
see `dqn.zip` below
1. corrected code from `Deep-Recurrent-Q-Network.ipynb`
In Python 3, `zip` is a class whose instances are one-shot iterators, so the result needs to be converted to a `list`:
`episodeBuffer = list(zip(bufferArray))`
1. corrected `helper.…
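A small illustration of why the conversion is needed: in Python 3, `zip` returns a lazy iterator that cannot be indexed or reused, unlike the list that Python 2's `zip` returned:

```python
# zip() yields a one-shot zip object, not a list.
pairs = zip([1, 2], ["a", "b"])

# pairs[0] would raise TypeError: 'zip' object is not subscriptable.
# Materialize it first so it can be indexed and iterated repeatedly:
episode_buffer = list(pairs)
assert episode_buffer[0] == (1, "a")
```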
-
Hi, I followed all the steps mentioned in the TurtleBot3 manual, and I get this error whenever I try to launch `dqn_stage3` or `dqn_stage4`. I am a noob at ROS 2; please help me fix this.
System details:
U…
-
When using demo_DQN_Dueling_Double_DQN, the .pt file saved at the end of training cannot be used as the weight file at test time. Should the save call be changed from
torch.save(actor, actor_path)
to
torch.save(actor.state_dict(), actor_path)?
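Saving the `state_dict` and rebuilding the architecture before loading is the standard PyTorch pattern; a minimal sketch (a hypothetical `nn.Linear` stands in for the demo's actual actor network):

```python
import torch
import torch.nn as nn

actor_path = "actor.pt"
actor = nn.Linear(4, 2)  # stand-in for the real actor architecture

# Save only the parameters, not the pickled module object:
torch.save(actor.state_dict(), actor_path)

# At test time, rebuild the same architecture, then load the weights:
actor_test = nn.Linear(4, 2)
actor_test.load_state_dict(torch.load(actor_path))
actor_test.eval()
```

Saving the whole module with `torch.save(actor, ...)` pickles the class by reference, so loading breaks whenever the class definition moves or changes; the `state_dict` form avoids that.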
-
Hello, I have read your article carefully and carried out the reproduction work. In the process of reproducing the code, I ran into the following problems; I hope you can reply.
1. When I trained with low…
-
The paper 'Evolving RL Algorithms' (https://arxiv.org/abs/2101.03958) uses evolution strategies to find new modifications of DQN. The paper reports the two best algorithms found, DQN Clipped and DQN Re…
-
Create the initial AirSim environment