Probabilistic model checking and Markov decision processes (MDPs) form two interlinked branches of formal analysis for systems operating under uncertainty. These techniques offer a mathematical ...
Quasi-open-loop policies consist of sequences of Markovian decision rules that are insensitive to one component of the state space. Given a semi-Markov decision process (SMDP), we distinguish between ...
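As a concrete reading of the first sentence above (the factorization of the state space and the symbols $X$, $Y$, $A$, $d_t$, $\hat{d}_t$ are chosen here purely for illustration; the text does not fix this notation): suppose the SMDP state space factors as $S = X \times Y$, where $Y$ is the component to which the policy is insensitive. A quasi-open-loop policy is then a sequence $\pi = (d_0, d_1, \dots)$ of Markovian decision rules $d_t \colon S \to A$ satisfying
\[
d_t(x, y) = d_t(x, y') \quad \text{for all } x \in X \text{ and } y, y' \in Y,
\]
that is, each $d_t$ factors through the projection onto $X$ as $d_t = \hat{d}_t \circ \mathrm{proj}_X$ for some $\hat{d}_t \colon X \to A$. Intuitively, such a policy sits between a fully open-loop policy, which ignores the state entirely, and a general closed-loop Markovian policy, which may depend on both components.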