Closed: bramhaag closed this issue 2 years ago
Hi Bram,
Some quick remarks:
In general, this looks like a setting for which Baier et al. (see https://doi.org/10.1007/978-3-662-54580-5_16) provide an intricate solution.
However, the description of the query sounds like something significantly easier. Could you contact us (potentially via email) with some more information?
Best, Sebastian
Indeed, conditional rewards are only supported for DTMCs; for MDPs, Storm can only compute conditional probabilities.
This is not exactly what you intended to do, but maybe the following types of measures are interesting for you as well (both are supported by Storm):
`Pmax=? [F<=100 "finished"]`
`quantile(M, Pmax>=0.9 [F<=M "finished"])`
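To make the semantics of these two queries concrete, here is a minimal sketch on a hypothetical two-state MDP (the states, actions, and probabilities are invented for illustration and are not from this thread). The first query is step-bounded reachability via backward induction; the quantile query is the smallest step bound `M` at which that probability crosses the threshold.

```python
# Hypothetical two-state MDP: state 0 is initial, state 1 is the
# absorbing "finished" state. Each action maps to a successor distribution.
MDP = {
    0: {
        "fast": {1: 0.3, 0: 0.7},
        "slow": {1: 0.2, 0: 0.8},
    },
    1: {"loop": {1: 1.0}},  # goal state is absorbing
}
GOAL = 1

def pmax_bounded(horizon):
    """Pmax of reaching GOAL within `horizon` steps, by backward induction."""
    v = {s: (1.0 if s == GOAL else 0.0) for s in MDP}
    for _ in range(horizon):
        v = {
            s: 1.0 if s == GOAL else max(
                sum(p * v[t] for t, p in dist.items())
                for dist in MDP[s].values()
            )
            for s in MDP
        }
    return v[0]

def quantile(threshold):
    """Smallest bound M with Pmax=? [F<=M "finished"] >= threshold."""
    m = 0
    while pmax_bounded(m) < threshold:
        m += 1
    return m

print(quantile(0.9))  # 7, since 1 - 0.7**7 ≈ 0.918 is the first value >= 0.9
```

In this toy chain the maximizing scheduler always picks "fast", so the bounded probability is 1 - 0.7**M; real quantile computations in Storm avoid recomputing each horizon from scratch.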
Thank you for the examples, @tquatmann. For my purposes, those two properties are sufficient.
I have an MDP which does not always reach the goal state. In the cases where it does reach the goal state, I am interested in finding the minimum number of transitions needed.
I have assigned rewards to the transitions of interest, and used the following property to find the minimum reward needed:
Regardless of which engine I select, I get the following output:
Am I doing something wrong, or is this feature simply not supported?
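As background for the conditional quantity discussed above: for a DTMC, the expected number of transitions conditioned on reaching the goal can be computed by rescaling the chain with the reachability probabilities (the h-transform). A minimal sketch on a hypothetical three-state chain (all states and probabilities are invented for illustration, not taken from this issue):

```python
# Hypothetical three-state DTMC: 0 = start, 1 = goal, 2 = failure trap.
# P[s] is the successor distribution of state s.
P = {
    0: {1: 0.5, 0: 0.3, 2: 0.2},
    1: {1: 1.0},
    2: {2: 1.0},
}
GOAL, TRAP = 1, 2

def reach_prob(iters=200):
    """h(s) = probability of eventually reaching GOAL, by fixed-point iteration."""
    h = {s: (1.0 if s == GOAL else 0.0) for s in P}
    for _ in range(iters):
        h = {s: 1.0 if s == GOAL else (0.0 if s == TRAP else
             sum(p * h[t] for t, p in P[s].items())) for s in P}
    return h

def conditional_expected_steps(iters=200):
    """E[transitions to GOAL | GOAL is reached], via the h-transformed chain."""
    h = reach_prob()
    e = {s: 0.0 for s in P}
    for _ in range(iters):
        e = {s: 0.0 if s in (GOAL, TRAP) else
             1.0 + sum(p * h[t] / h[s] * e[t]
                       for t, p in P[s].items() if h[t] > 0)
             for s in P}
    return e[0]

print(round(reach_prob()[0], 4))              # 5/7 ≈ 0.7143
print(round(conditional_expected_steps(), 4)) # 10/7 ≈ 1.4286
```

For an MDP the analogous conditional expected reward is harder because the optimal scheduler may depend on the conditioning event, which is why (per the reply above) Storm restricts conditional rewards to DTMCs.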