sawcordwell / pymdptoolbox

Markov Decision Process (MDP) Toolbox for Python

Fix division by zero in `ValueIteration._boundIter` #32

Open AdamGleave opened 4 years ago

AdamGleave commented 4 years ago

`mdp.ValueIteration` uses `ValueIteration._boundIter` to set `self.max_iter` for discounted MDPs. The bound is based on the span of the change in values after one step of the Bellman operator: the difference between its maximum and minimum over states.
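For context, here is a condensed, illustrative sketch of that span-based bound; the function name, variable names, and NumPy idioms are my own, not the library's source:

```python
import math

import numpy as np


def bound_iter_sketch(P, R, discount, epsilon=0.01):
    """Illustrative span-based iteration bound, in the spirit of _boundIter.

    P: (A, S, S) array of transition matrices, R: (S, A) reward array.
    """
    S = P.shape[1]
    V = np.zeros(S)  # default all-zero initial value function
    # One application of the Bellman operator:
    # Q[s, a] = R[s, a] + discount * sum_t P[a, s, t] * V[t]
    Q = R + discount * np.einsum("ast,t->sa", P, V)
    Vnext = Q.max(axis=1)
    # Span of the one-step change in values.
    diff = Vnext - V
    span = diff.max() - diff.min()
    # Divides by zero when span == 0 -- the bug this PR fixes.
    max_iter = (math.log((epsilon * (1 - discount) / discount) / span)
                / math.log(discount))
    return int(math.ceil(max_iter))
```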

The problem is that for some MDPs the value is identical in every state after a single application of the Bellman operator. With the default all-zero initial values, this happens whenever $\max_a R(s,a)$ is the same for every state -- not common, but it does happen (any MDP with an all-zero reward is a degenerate example). In this case the span is 0 and `max_iter` becomes negative infinity.
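A minimal reproduction using the public `mdptoolbox.mdp.ValueIteration` API; the two-state MDP itself is made up for illustration:

```python
import numpy as np
import mdptoolbox.mdp

# Hypothetical two-state, two-action MDP with all-zero rewards.
P = np.array([
    [[0.5, 0.5],
     [0.5, 0.5]],   # action 0
    [[1.0, 0.0],
     [0.0, 1.0]],   # action 1
])
R = np.zeros((2, 2))  # R[s, a] = 0 for every state-action pair

# For a discounted MDP the constructor computes the iteration bound, so
# before the fix this line already hits the failure: one Bellman backup
# leaves the value function identically zero, so the span is 0.
vi = mdptoolbox.mdp.ValueIteration(P, R, discount=0.9)
```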

If my understanding of the bound is correct, in this case we actually don't need to perform any iterations of value iteration -- the all-zero initialisation is OK, since the greedy policy it induces is already optimal. So I've added an explicit check that sets `max_iter = 0` in this case, avoiding the division by zero. I've also added a regression test for the all-zero MDP (sketched at the end of this comment).
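The diff itself is not shown in this thread; a minimal sketch of such a guard, written in the style of the bound computation above (the surrounding attribute names are assumptions):

```python
# Inside ValueIteration._boundIter, after computing the span:
if span == 0:
    # One Bellman backup left the value function constant across all
    # states, so the greedy policy w.r.t. the initial values is already
    # optimal and no iterations are needed.
    self.max_iter = 0
    return
self.max_iter = int(math.ceil(
    math.log((epsilon * (1 - self.discount) / self.discount) / span)
    / math.log(self.discount)))
```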

Some doctests are failing, but they were already failing before this change. All other tests pass on my machine.
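The regression test mentioned above is likewise not reproduced in the thread; a hypothetical version, reusing the two-state MDP from earlier, might look like this:

```python
import numpy as np
import mdptoolbox.mdp


def test_value_iteration_all_zero_reward():
    """Regression test: an all-zero-reward MDP must not crash _boundIter."""
    P = np.array([
        [[0.5, 0.5], [0.5, 0.5]],
        [[1.0, 0.0], [0.0, 1.0]],
    ])
    R = np.zeros((2, 2))
    vi = mdptoolbox.mdp.ValueIteration(P, R, discount=0.9)
    vi.run()
    # With zero reward everywhere, the optimal value is zero in every state.
    assert np.allclose(vi.V, 0.0)
```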