higgsfield / RL-Adventure

PyTorch implementation of DQN / DDQN / Prioritized Replay / Noisy Networks / Distributional Values / Rainbow / Hierarchical RL

DQN example: target DQN == behavior DQN (bug? or by design?) #32

Open gordicaleksa opened 3 years ago

gordicaleksa commented 3 years ago

Hi!

Did you make these two the same on purpose, following "Algorithm 1" from the original 2013 arXiv paper?

The paper initially states that the DQN should be frozen and used as the target net (for stability), but later, in "Algorithm 1", they (probably by mistake) use the same theta parameters for both nets.
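For context, the fix described here (a frozen target network that is only synced periodically, as in the 2015 Nature DQN) can be sketched roughly as follows. This is a hypothetical illustration, not the repo's actual code; the network shape and function names are made up for the example:

```python
import copy
import torch
import torch.nn as nn

# Stand-in for the behavior (online) DQN; a tiny linear net for illustration.
behavior_net = nn.Linear(4, 2)

# The target net starts as a frozen copy with its own, separate parameters.
target_net = copy.deepcopy(behavior_net)
target_net.eval()

def compute_td_target(reward, next_state, gamma=0.99):
    # Bootstrap from the frozen target net, not the behavior net.
    with torch.no_grad():
        next_q = target_net(next_state).max(dim=1).values
    return reward + gamma * next_q

def sync_target():
    # Hard update: copy behavior parameters into the target net
    # every N training steps.
    target_net.load_state_dict(behavior_net.state_dict())
```

The point of the issue is that if `target_net` and `behavior_net` are literally the same object (same theta), the bootstrap target moves with every gradient step, which is exactly the instability the target network was introduced to avoid.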

Ethan00Si commented 3 years ago

I think it's a bug. You can find more information in this issue.