MDPs and POMDPs in Julia - An interface for defining, solving, and simulating fully and partially observable Markov decision processes on discrete and continuous spaces.
The `ValuePolicy` documentation needs some work. In particular, it does not show how to construct the policy, and the statement "The entry at `stateindex(mdp, s)` is the action that will be taken in state `s`." does not make sense for a `ValuePolicy`.
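To illustrate the documentation gap, here is a minimal sketch of how a `ValuePolicy` might be constructed, assuming the `POMDPTools.Policies` API and using `SimpleGridWorld` from POMDPModels as an example problem; the zero-initialized value table is purely illustrative.

```julia
using POMDPs, POMDPTools, POMDPModels

mdp = SimpleGridWorld()

# value_table is a |S| x |A| matrix of action values: the entry at
# [stateindex(mdp, s), actionindex(mdp, a)] is the estimated value of
# taking action a in state s. Initialized to zeros for illustration.
value_table = zeros(length(states(mdp)), length(actions(mdp)))

policy = ValuePolicy(mdp, value_table)

# action(policy, s) selects the action that maximizes the row of the
# value table for state s -- so the entry at stateindex(mdp, s) is a
# row of values, not "the action that will be taken in state s" as the
# current docstring suggests.
s = first(states(mdp))
a = action(policy, s)
```

This also shows why the quoted docstring sentence is misleading: it describes the indexing convention of a vector-valued policy, while a `ValuePolicy` stores a matrix of values and derives the action by argmax.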