mdp.ValueIteration uses ValueIteration._boundIter to set self.max_iter for discounted MDPs. The bound is based on the span: the difference between the maximum and minimum state values after one step of the Bellman operator.
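Concretely, the bound is of the form $\text{max\_iter} \approx \lceil \log(\epsilon (1 - \gamma) / \text{span}) / \log \gamma \rceil$ (up to the exact constants used in _boundIter), so the span ends up in a denominator inside the logarithm.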
The problem is that for some MDPs the value is identical for all states after a single application of the Bellman operator. This happens when $R(s,a)$ is equal to $Q(s,a)$ -- not common, but it does happen (e.g. any MDP with an all-zero reward is a degenerate example). In that case the span is 0 and max_iter becomes negative infinity.
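For reference, here is a minimal sketch of the degenerate case against pymdptoolbox's public API; the two-state, one-action chain is just an illustrative example, not the regression test itself:

```python
import numpy as np
from mdptoolbox.mdp import ValueIteration

# One action, two states, all-zero reward: a single Bellman step leaves
# every state at value 0, so the span used by _boundIter is 0.
P = np.array([[[0.5, 0.5],
               [0.5, 0.5]]])   # transitions, shape (A, S, S)
R = np.zeros((2, 1))           # reward, shape (S, A), identically zero

# Before this change the span-zero case broke the bound computation in
# ValueIteration._boundIter; with the check described below it solves cleanly.
vi = ValueIteration(P, R, discount=0.9)
vi.run()
print(vi.V)  # (0.0, 0.0)
```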
If my understanding of the bound is correct, in this case we don't actually need to perform any iterations of value iteration -- the all-zero initialisation is already fine. So I've added an explicit check that sets max_iter = 0 in this case, avoiding the division by zero. I've also added a regression test for the all-zero MDP.
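The check itself is small. Here is a sketch of the idea as a standalone function -- the real change lives inside ValueIteration._boundIter, and its exact constants and variable names differ:

```python
import math

def bound_iterations(span, discount, epsilon):
    """Sketch of the span-based iteration bound with the added guard.

    span is the span of the one-step Bellman update; the constants
    below are schematic, the point is only where the guard sits.
    """
    if span == 0:
        # One Bellman step left all states with identical values, so the
        # all-zero initialisation already meets the stopping criterion:
        # no iterations are needed and the division below is skipped.
        return 0
    return int(math.ceil(math.log(epsilon * (1 - discount) / span)
                         / math.log(discount)))

print(bound_iterations(span=0.0, discount=0.9, epsilon=0.01))  # 0
print(bound_iterations(span=2.0, discount=0.9, epsilon=0.01))  # a positive bound
```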
Some doctests are failing, but they were already failing before this change. All other tests pass on my machine.