princewen / tensorflow_practice

TensorFlow practice exercises, covering reinforcement learning, recommender systems, NLP, and more

Your AC code has a bug! #28

Open · clicdl opened this issue 5 years ago

clicdl commented 5 years ago

In the code you say you're using the td_error actor-critic algorithm, but when you actually compute the actor's gradient you use q instead of td_error. The fix is as follows:

def learn(self, s, a, r, s_):
    s, s_ = s[np.newaxis, :], s_[np.newaxis, :]
    # Evaluate Q(s', a') for every possible action in the next state.
    next_a = [[i] for i in range(N_A)]
    s_ = np.tile(s_, [N_A, 1])
    q_ = self.sess.run(self.q, {self.s: s_, self.a: next_a})
    # Bootstrapped target: the max Q-value over next actions.
    q_ = np.max(q_, axis=0, keepdims=True)
    # Train the critic and return q, which the actor then consumes.
    q, _ = self.sess.run([self.q, self.train_op],
                         {self.s: s, self.q_: q_, self.r: r, self.a: [[a]]})
    return q
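
For context, in the td_error (advantage) form of actor-critic, the actor's policy-gradient loss is weighted by the critic's TD error rather than by a raw Q-value. A minimal TF 1.x-style sketch of such an actor loss, as it would appear inside the actor's graph construction (the identifiers acts_prob, a, td_error, and lr are illustrative assumptions, not names from this repo):

    import tensorflow as tf  # TF 1.x graph-mode API

    # Illustrative actor loss: log-likelihood of the taken action,
    # weighted by the TD error fed in from the critic.
    log_prob = tf.log(self.acts_prob[0, self.a])            # log pi(a|s)
    self.exp_v = tf.reduce_mean(log_prob * self.td_error)   # advantage-weighted objective
    self.train_op = tf.train.AdamOptimizer(lr).minimize(-self.exp_v)  # ascend exp_v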
HannH commented 5 years ago

> In the code you say you're using the td_error actor-critic algorithm, but when you actually compute the actor's gradient you use q instead of td_error. The fix is as follows: […]

Just look at the original source: it really does use td_error. This repo's author changed things around and ended up breaking it, and also deleted the attribution to the original source. https://github.com/MorvanZhou/Reinforcement-learning-with-tensorflow/blob/967c829335fa34a329b7976b29fc1f579776d67f/contents/8_Actor_Critic_Advantage/AC_CartPole.py#L74
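
For readers comparing the two implementations, the pattern in the linked file is roughly: the critic learns a state value V(s), its learn() returns td_error = r + GAMMA * V(s') - V(s), and the actor is trained on that returned TD error. A rough sketch of that critic pattern (paraphrased under the assumption of a TF 1.x setup; the layer size and learning rate are placeholders, not values from the linked file):

    import numpy as np
    import tensorflow as tf  # TF 1.x graph-mode API

    GAMMA = 0.9  # discount factor

    class Critic:
        """State-value critic whose learn() hands the TD error to the actor."""
        def __init__(self, sess, n_features, lr=0.01):
            self.sess = sess
            self.s = tf.placeholder(tf.float32, [1, n_features], "state")
            self.v_ = tf.placeholder(tf.float32, [1, 1], "v_next")
            self.r = tf.placeholder(tf.float32, None, "reward")

            # Small value network V(s).
            h = tf.layers.dense(self.s, 20, tf.nn.relu)
            self.v = tf.layers.dense(h, 1)

            # td_error = r + GAMMA * V(s') - V(s); the critic minimizes its square.
            self.td_error = self.r + GAMMA * self.v_ - self.v
            self.train_op = tf.train.AdamOptimizer(lr).minimize(tf.square(self.td_error))

        def learn(self, s, r, s_):
            s, s_ = s[np.newaxis, :], s_[np.newaxis, :]
            v_ = self.sess.run(self.v, {self.s: s_})  # bootstrap V(s')
            td_error, _ = self.sess.run([self.td_error, self.train_op],
                                        {self.s: s, self.v_: v_, self.r: r})
            return td_error  # the actor trains on this, not on a raw q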