Fall 2016

Computing Probabilistic Bisimilarity Distances via Policy Iteration

Tuesday, Oct. 4, 2016, 11:10 am – 11:50 am

Calvin Lab Auditorium

A transformation mapping a labelled Markov chain to a simple stochastic game is presented. In the resulting simple stochastic game, each vertex corresponds to a pair of states of the labelled Markov chain, and the value of a vertex is shown to equal the probabilistic bisimilarity distance, a notion due to Desharnais, Gupta, Jagadeesan and Panangaden, of the corresponding pair of states. Bacci, Bacci, Larsen and Mardare introduced an algorithm to compute the probabilistic bisimilarity distances of a labelled Markov chain. A modification of a basic version of their algorithm is shown to coincide with policy iteration applied to the corresponding simple stochastic game. Furthermore, this algorithm is shown to take exponential time in the worst case. Finally, experimental results confirm that it nevertheless performs better in practice than other algorithms for computing probabilistic bisimilarity distances. Joint work with Qiyi Tang.
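To make the game-theoretic setting concrete, here is a minimal sketch of policy iteration on a toy simple stochastic game. The encoding, the example game, and all names (`GAME`, `evaluate`, `policy_iteration`) are hypothetical illustrations, not the construction from the talk; the inner evaluation uses value iteration for simplicity, and the game is unrelated to any particular labelled Markov chain.

```python
# A tiny simple stochastic game (hypothetical encoding, for illustration only).
# Each vertex is ("min"|"max"|"avg", successors) or ("sink", value).
GAME = {
    "t1": ("sink", 1.0),
    "t0": ("sink", 0.0),
    "a":  ("avg", ["t1", "t0"]),   # moves to each successor with prob 1/2
    "m":  ("min", ["a", "t1"]),    # min player picks a successor
    "x":  ("max", ["m", "t0"]),    # max player picks a successor
}

def evaluate(min_strategy, eps=1e-12):
    """Value of the one-player game once min's choices are fixed
    (computed here by value iteration, as a simplification)."""
    v = {u: 0.0 for u in GAME}
    while True:
        w = {}
        for u, (kind, data) in GAME.items():
            if kind == "sink":
                w[u] = data
            elif kind == "avg":
                w[u] = sum(v[s] for s in data) / len(data)
            elif kind == "max":
                w[u] = max(v[s] for s in data)
            else:  # min vertex: follow the fixed strategy
                w[u] = v[min_strategy[u]]
        if max(abs(w[u] - v[u]) for u in GAME) < eps:
            return w
        v = w

def policy_iteration():
    """Hoffman-Karp-style policy iteration over the min player's strategies:
    evaluate the current strategy, then switch each min vertex to its
    best successor, until no switch improves the value."""
    strategy = {u: d[0] for u, (kind, d) in GAME.items() if kind == "min"}
    while True:
        v = evaluate(strategy)
        improved = {u: min(GAME[u][1], key=lambda s: v[s]) for u in strategy}
        if improved == strategy:
            return v
        strategy = improved

values = policy_iteration()
print({u: round(val, 6) for u, val in sorted(values.items())})
```

In the bisimilarity-distance view described in the abstract, each game vertex would correspond to a pair of Markov-chain states and the value of that vertex to their distance; the exponential worst case mentioned above arises from how many strategies policy iteration may visit before stabilizing.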