JuliaPOMDP / POMDPs.jl

MDPs and POMDPs in Julia - An interface for defining, solving, and simulating fully and partially observable Markov decision processes on discrete and continuous spaces.
http://juliapomdp.github.io/POMDPs.jl/latest/

Typo in documentation #511

Closed: slwu89 closed this issue 1 year ago

slwu89 commented 1 year ago

In this section (http://juliapomdp.github.io/POMDPs.jl/latest/concepts/#POMDPs-and-MDPs), the docs state that "Z is the agent's observation space, and O defines the probability of receiving each observation at a transition".

However, in this section (http://juliapomdp.github.io/POMDPs.jl/latest/def_pomdp/#State,-action-and-observation-spaces) it says "The state, action and observation spaces (S, A, and O)", and later on, "The transition and observation keyword arguments are used to define the transition distribution, T, and observation distribution, Z, respectively."

So it seems the docs should consistently use one symbol for the observation space and a different one for the observation distribution (the function that takes an action, state, and observation and returns a probability).

lassepe commented 1 year ago

Thank you for letting us know. That was indeed a typo (O is the agent's observation space, Z is the observation probability). Now fixed.
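
For anyone landing here from the docs, a minimal sketch of the corrected convention, assuming the QuickPOMDPs-style constructor referenced in the linked definition docs: the `observations` keyword supplies the observation space (O), while the `observation` keyword supplies the observation distribution (Z). The tiger-style states, observations, and rewards below are purely illustrative.

```julia
using QuickPOMDPs: QuickPOMDP
using POMDPTools: Deterministic, SparseCat

m = QuickPOMDP(
    states       = [:left, :right],   # S: state space
    actions      = [:listen, :open],  # A: action space
    observations = [:left, :right],   # O: observation space
    discount     = 0.95,

    # T: transition distribution over next states given (s, a)
    transition = (s, a) -> Deterministic(s),

    # Z: observation distribution over observations given (a, s')
    observation = (a, sp) -> a == :listen ?
        SparseCat([:left, :right], sp == :left ? [0.85, 0.15] : [0.15, 0.85]) :
        SparseCat([:left, :right], [0.5, 0.5]),

    reward = (s, a) -> a == :listen ? -1.0 : (s == :left ? -100.0 : 10.0),
    initialstate = SparseCat([:left, :right], [0.5, 0.5]),
)
```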