MDPs and POMDPs in Julia - An interface for defining, solving, and simulating fully and partially observable Markov decision processes on discrete and continuous spaces.
Currently `collect(stepthrough(...))` returns a `Vector{Any}`. It might be nice to return something more concrete. This involves implementing `eltype(::(PO)MDPSimIterator)`.
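A minimal sketch of what this could look like for the MDP case is below. The iterator struct here is only a stand-in for the simulator's internal type, and the step field names `(s, a, r, sp, t)` and the `Float64`/`Int` element types for reward and time are assumptions, not the actual internals:

```julia
using POMDPs

# Stand-in for the simulator's internal iterator type; the real
# (PO)MDPSimIterator and its fields may differ.
struct MDPSimIterator{M<:MDP}
    mdp::M
    # policy, rng, max_steps, etc. omitted
end

# Report a concrete NamedTuple element type so that collect(stepthrough(...))
# can allocate something tighter than Vector{Any}. Float64 for the reward and
# Int for the step count are assumptions.
function Base.eltype(it::MDPSimIterator)
    S = statetype(it.mdp)   # concrete state type declared by the problem
    A = actiontype(it.mdp)  # concrete action type declared by the problem
    return NamedTuple{(:s, :a, :r, :sp, :t), Tuple{S, A, Float64, S, Int}}
end
```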
Implementing this should be pretty straightforward for states, actions, observations, reward, and time. It is a bit more complex for beliefs and info; it might require something like `belief_type(updater, pomdp)`, which would not be too hard to make.
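One possible shape for that helper, sketched below, is a generic fallback that asks the updater what it produces from the problem's initial state distribution; `belief_type` is the hypothetical function proposed above, and the use of `initialstate(pomdp)` as the initial state distribution is an assumption:

```julia
using POMDPs

# Hypothetical belief_type helper: the belief type this updater produces for
# this problem. The generic fallback builds an initial belief and reports its
# type; it assumes initialstate(pomdp) returns the initial state distribution.
belief_type(up::Updater, pomdp::POMDP) =
    typeof(initialize_belief(up, initialstate(pomdp)))
```

Concrete updaters could then override this with a purely type-level method so that no belief has to be constructed just to learn its type, and the POMDP iterator's `eltype` could include a `:b` field of this type.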