yrlu / irl-imitation

Implementation of Inverse Reinforcement Learning (IRL) algorithms in Python/Tensorflow. Deep MaxEnt, MaxEnt, LPIRL

Possible bug: value iteration #3

Closed · magnusja closed this issue 6 years ago

magnusja commented 6 years ago

Hey there,

I found another issue. Value iteration is defined like this:

V(s) ← max_a Σ_{s', r} p(s', r | s, a) [r + γ V(s')]

See: http://ufal.mff.cuni.cz/~straka/courses/npfl114/2016/sutton-bookdraft2016sep.pdf

Your code:

```python
for s in range(N_STATES):
    v_s = []
    values[s] = max([sum([P_a[s, s1, a]*(rewards[s] + gamma*values_tmp[s1]) for s1 in range(N_STATES)]) for a in range(N_ACTIONS)])
```

https://github.com/stormmax/irl-imitation/blob/master/mdp/value_iteration.py#L42

So you are using the reward of the current state s and adding it to the discounted value of the next state s1. As I understand the formula, you should be doing:

```python
for s in range(N_STATES):
    v_s = []
    values[s] = max([sum([P_a[s, s1, a]*(rewards[s1] + gamma*values_tmp[s1]) for s1 in range(N_STATES)]) for a in range(N_ACTIONS)])
```
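
For reference, a minimal self-contained sketch of both update rules (the toy MDP, the `value_iteration` helper and its `reward_of_next_state` flag are made up for illustration; the array layout `P_a[s, s1, a]` and the variable names follow the snippets above):

```python
import numpy as np

def value_iteration(P_a, rewards, gamma=0.9, eps=1e-6, reward_of_next_state=False):
    """Tabular value iteration.

    P_a[s, s1, a] -- probability of landing in s1 when taking action a in s
    rewards[s]    -- reward attached to state s
    If reward_of_next_state is True, the update uses rewards[s1] (the variant
    proposed above); otherwise it uses rewards[s] (the repo's variant).
    """
    n_states, _, n_actions = P_a.shape
    values = np.zeros(n_states)
    while True:
        values_tmp = values.copy()
        for s in range(n_states):
            values[s] = max(
                sum(P_a[s, s1, a] * ((rewards[s1] if reward_of_next_state else rewards[s])
                                     + gamma * values_tmp[s1])
                    for s1 in range(n_states))
                for a in range(n_actions))
        if np.max(np.abs(values - values_tmp)) < eps:
            return values

# Toy 3-state, 2-action MDP, made up purely for illustration.
rng = np.random.default_rng(0)
P_a = rng.random((3, 3, 2))
P_a /= P_a.sum(axis=1, keepdims=True)               # normalise over next states
rewards = np.array([0.0, 0.0, 1.0])
print(value_iteration(P_a, rewards))                             # R(s) convention
print(value_iteration(P_a, rewards, reward_of_next_state=True))  # R(s') convention
```
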
mmcelikok commented 6 years ago

Hi, the value iteration is correct. It should be R(s), not R(s1). That is unfortunately a common confusion caused by the control-theory notation. In Sutton & Barto's equation, R_{t+1} does not mean the reward of S_{t+1} but the reward of S_t; the reason is that in control theory you observe the reward of a state at the next time step. So, say at time step t you arrive at s5, then S_t = s5. But you observe the reward of s5 at the next time step, not instantaneously, so R_{t+1} = R(s5). In value iteration, the value of a state is then updated as the reward of that state plus the max, over actions, of the value of the possible next states, averaged over the transition probabilities of course.
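
In other words, once the reward depends only on the state you are in (as in this repo's grid world) and is observed one step later, Sutton & Barto's update reduces to the form used in the code. A short restatement of the argument, writing the book's update first:

```latex
\begin{align*}
V(s) &\leftarrow \max_a \sum_{s', r} p(s', r \mid s, a)\,\bigl[ r + \gamma V(s') \bigr] \\
% with a state-only reward observed one step later, R_{t+1} = R(S_t) = R(s):
V(s) &\leftarrow \max_a \sum_{s'} P(s' \mid s, a)\,\bigl[ R(s) + \gamma V(s') \bigr]
      = R(s) + \gamma \max_a \sum_{s'} P(s' \mid s, a)\, V(s')
\end{align*}
```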

That being said, there are other problems in the code, such as the computation of the stochastic policy in value_iteration. He/she uses a simpler scheme than Ziebart. For more, check Ziebart's thesis (it came after the MaxEnt paper, in 2010); it is in the Algorithms section, under "State log partition function".
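
For context, here is a rough sketch of that backward pass as given in the original MaxEnt paper (Algorithm 1); the thesis formulates it with the log of these quantities. The function, its `terminal` mask and the variable names below are illustrative, not the repo's code:

```python
import numpy as np

def ziebart_stochastic_policy(P_a, rewards, terminal, n_iters=50):
    """Backward pass computing the state/action partition functions.

    P_a[s, s1, a] -- transition probabilities, rewards[s] -- state reward,
    terminal      -- boolean mask marking the goal/terminal state(s).
    In practice this is done in log space (the "state log partition function")
    to avoid overflow; the plain form is kept here for readability.
    """
    n_states, _, n_actions = P_a.shape
    z_s = terminal.astype(float)                     # Z_s initialised at the goal
    for _ in range(n_iters):                         # backward recursion
        # Z_a(s, a) = sum_{s'} P(s' | s, a) * exp(R(s)) * Z_s(s')
        z_a = np.exp(rewards)[:, None] * np.einsum('ska,k->sa', P_a, z_s)
        # Z_s(s) = sum_a Z_a(s, a), keeping the unit mass at terminal states
        z_s = z_a.sum(axis=1) + terminal.astype(float)
    return z_a / z_s[:, None]                        # pi(a|s) = Z_a / Z_s
```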

magnusja commented 6 years ago

Hey, thanks for the explanation! I had noticed the difference between the paper and this implementation. It seems that for this little toy example it still works quite well.