In this section (http://juliapomdp.github.io/POMDPs.jl/latest/concepts/#POMDPs-and-MDPs), the docs state that "Z is the agent's observation space, and O defines the probability of receiving each observation at a transition".
However, in this section (http://juliapomdp.github.io/POMDPs.jl/latest/def_pomdp/#State,-action-and-observation-spaces) it says "The state, action and observation spaces (S, A, and O)", and later on "The transition and observation keyword arguments are used to define the transition distribution, T, and observation distribution, Z, respectively."
So it seems like the docs should consistently pick one symbol for the observation space and a different one for the observation distribution, i.e., the function that takes in an action, state, and observation and returns a probability. Right now the two pages swap the roles of O and Z.
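For concreteness, here are the two conventions side by side as POMDP tuples (the symbol assignments are taken from the quoted pages; I'm assuming the observation distribution conditions on the action and the resulting state, which is the convention POMDPs.jl uses):

```latex
% Convention on the "concepts" page:
% Z = observation space, O = observation distribution
(S, A, Z, T, O, R, \gamma), \qquad O(o \mid a, s') = \Pr(o \mid a, s')

% Convention on the "def_pomdp" page:
% O = observation space, Z = observation distribution
(S, A, O, T, Z, R, \gamma), \qquad Z(o \mid a, s') = \Pr(o \mid a, s')
```

Either convention is fine on its own; the issue is only that the two pages disagree, so a reader following links between them sees O and Z trade meanings.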