JuliaPOMDP / POMDPs.jl

MDPs and POMDPs in Julia - An interface for defining, solving, and simulating fully and partially observable Markov decision processes on discrete and continuous spaces.
http://juliapomdp.github.io/POMDPs.jl/latest/

eltype for stepthrough #558

Open zsunberg opened 2 months ago

zsunberg commented 2 months ago

Currently collect(stepthrough(...)) returns a Vector{Any}. It might be nice to return something more concrete. This involves implementing eltype(::(PO)MDPSimIterator).
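
A rough sketch of how that could look, assuming the requested symbols are stored as a tuple in the iterator's first type parameter and the problem sits in a `pomdp` field (the actual layout in POMDPSimulators/POMDPTools may differ); `spec_type` is a hypothetical helper:

```julia
using POMDPs

# Hypothetical helper: map a requested step symbol to a concrete type.
# Assumes Float64 rewards and Int step numbers; anything else falls back to Any.
function spec_type(sym::Symbol, m::POMDP)
    sym === :s  && return statetype(m)
    sym === :sp && return statetype(m)
    sym === :a  && return actiontype(m)
    sym === :o  && return obstype(m)
    sym === :r  && return Float64
    sym === :t  && return Int
    return Any  # beliefs, info, etc. -- see below
end

# Assuming the symbol spec is the iterator's first type parameter and the
# problem is stored in a `pomdp` field (actual internals may differ):
function Base.eltype(it::POMDPSimIterator{spec}) where spec
    return NamedTuple{spec, Tuple{map(sym -> spec_type(sym, it.pomdp), spec)...}}
end
```

Since `Base.IteratorEltype` defaults to `HasEltype()`, defining `eltype` this way should be enough for `collect` to produce a concretely typed `Vector` of `NamedTuple`s, and an analogous method could cover `MDPSimIterator`.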

This should be pretty straightforward for states, actions, observations, reward, and time. It is a bit more complex for beliefs and info. It might require something like belief_type(updater, pomdp), which would not be too hard to make.
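
For beliefs, the `belief_type` mentioned above could be a small trait that updaters opt into, with an `Any` fallback. A hypothetical sketch, not part of the current API:

```julia
using POMDPs

# Hypothetical trait: the belief type an updater produces for a given problem.
# Not part of the current POMDPs.jl interface.
belief_type(::Updater, ::POMDP) = Any  # conservative fallback

# An updater implementation could then specialize it, for example
# (illustrative only; the exact parameters of DiscreteBelief may differ):
# belief_type(::DiscreteUpdater, m::POMDP) = DiscreteBelief{typeof(m), statetype(m)}
```

The `spec_type` helper above could then return `belief_type(it.updater, it.pomdp)` for the `:b` symbol (again assuming an `updater` field), with `Any` remaining the fallback for `:info` unless solvers declare an info type.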