As noted here: https://github.com/JuliaPOMDP/POMDPs.jl/discussions/546#discussioncomment-9283721 `ValuePolicy` does not have an `updater` method. It seems like it should just be a `DiscreteUpdater`. We should also consider the appropriate `action` method for `ValuePolicy` with a POMDP, which, I suppose, should just return the QMDP action.
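A minimal sketch of what both methods could look like. This assumes `ValuePolicy` stores the problem in an `mdp` field and its `value_table` is indexed by `stateindex` and `actionindex` (field names and layout would need to be checked against the current POMDPTools source), and it sketches the QMDP rule as maximizing the expected state-action value under the belief:

```julia
using POMDPs
using POMDPTools

# Default belief updater for a ValuePolicy over a POMDP
# (assumes the problem is stored in the policy's `mdp` field)
POMDPs.updater(p::ValuePolicy{<:POMDP}) = DiscreteUpdater(p.mdp)

# QMDP-style action: argmax over actions of the expected
# state-action value under the belief b
function POMDPs.action(p::ValuePolicy{<:POMDP}, b)
    best_a = first(actions(p.mdp))
    best_v = -Inf
    for a in actions(p.mdp)
        ai = actionindex(p.mdp, a)
        # expected Q-value of action a under belief b
        v = sum(pdf(b, s) * p.value_table[stateindex(p.mdp, s), ai]
                for s in states(p.mdp))
        if v > best_v
            best_v, best_a = v, a
        end
    end
    return best_a
end
```

This keeps `action` well-defined for any belief supporting `pdf`, and the `updater` method would make `ValuePolicy` usable directly in `stepthrough`/simulation without the caller supplying an updater.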