Talks
Fall 2016

Learning Probabilistic Models for Graph Partitioning in the Presence of Noise

Monday, November 14th, 2016, 10:10 am to 10:50 am


Graph partitioning problems (like Minimum Balanced Cut) form a central topic of study in algorithms and machine learning research. However, most problems in this class are NP-hard in the worst case, and efficient algorithms with good worst-case guarantees (e.g., good approximation algorithms) have been elusive for many of them. Since real-world instances are unlikely to behave like worst-case instances, average-case models for graph partitioning and community detection have been widely studied to reason about the instances that occur in practice. The Stochastic Block Model (also known as the Planted Partition Model) is the most widely used probabilistic model in various fields, including machine learning, statistics, and the social sciences. Many existing algorithmic approaches, such as spectral methods, successfully learn the ground-truth clustering (or communities) when the data is drawn exactly according to the model, but they do not work well in the presence of errors.
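As a toy illustration of the model (not the algorithm from the talk), the following sketch samples a two-community stochastic block model and recovers the planted communities with a basic spectral method, using the sign pattern of the adjacency matrix's second eigenvector. All parameter choices (community size, edge probabilities) are illustrative assumptions.

```python
import numpy as np

def sample_sbm(n, p, q, rng):
    """Sample a two-community stochastic block model on 2n vertices.

    Vertices 0..n-1 form community A, vertices n..2n-1 form community B.
    Each edge appears independently: with probability p inside a
    community, with probability q (< p) across the two communities.
    """
    labels = np.array([0] * n + [1] * n)
    same = labels[:, None] == labels[None, :]
    probs = np.where(same, p, q)
    upper = np.triu(rng.random((2 * n, 2 * n)) < probs, k=1)
    return (upper | upper.T).astype(float), labels

def spectral_partition(adj):
    """Partition vertices by the sign of the eigenvector of the
    adjacency matrix with the second-largest eigenvalue."""
    vals, vecs = np.linalg.eigh(adj)   # eigenvalues in ascending order
    v2 = vecs[:, -2]                   # second-largest eigenvalue's vector
    return (v2 > 0).astype(int)

rng = np.random.default_rng(0)
adj, labels = sample_sbm(100, 0.6, 0.1, rng)
guess = spectral_partition(adj)
# accuracy up to relabeling the two communities (eigenvector sign is arbitrary)
acc = max(np.mean(guess == labels), np.mean(guess != labels))
```

With this large a gap between p and q, the spectral method recovers the planted partition essentially exactly; the abstract's point is that such methods degrade once the data deviates from the exact model.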

In this talk, I will address the following question: can we design polynomial-time algorithms for learning probabilistic models for graph partitioning that are robust to modeling errors?

I will first survey different average-case models for graph partitioning, such as semi-random models, which are more general than the stochastic block model, along with algorithmic results for these models. Then I will present a new polynomial-time algorithm that recovers the clustering (or communities) in a stochastic block model even in the presence of various modeling errors. These algorithms can tolerate even a constant fraction of the edges being corrupted adversarially.
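To make the error model concrete, here is a small self-contained sketch of corrupting a constant fraction of vertex pairs in a graph by flipping their edge status. This is only an illustration of the corruption budget: the flipped pairs are chosen at random here, whereas the adversary considered in the talk may choose them after seeing the sampled graph.

```python
import numpy as np

def corrupt_edges(adj, fraction, rng):
    """Flip a constant fraction of vertex pairs in a symmetric 0/1
    adjacency matrix, adding or deleting those edges. The matrix is
    kept symmetric. (Illustrative: pairs are flipped at random, not
    adversarially.)"""
    n = adj.shape[0]
    iu, ju = np.triu_indices(n, k=1)
    flip = rng.choice(len(iu), size=int(fraction * len(iu)), replace=False)
    out = adj.copy()
    new_vals = 1.0 - out[iu[flip], ju[flip]]   # flip edge/non-edge status
    out[iu[flip], ju[flip]] = new_vals
    out[ju[flip], iu[flip]] = new_vals
    return out

rng = np.random.default_rng(1)
n = 50
base = np.triu(rng.random((n, n)) < 0.3, k=1)  # a random 0/1 graph
adj = (base | base.T).astype(float)
corrupted = corrupt_edges(adj, 0.1, rng)       # corrupt 10% of vertex pairs
changed = int(np.sum(np.triu(adj != corrupted, k=1)))
```

Each chosen pair changes status exactly once, so `changed` equals the corruption budget `int(0.1 * n * (n - 1) / 2)`.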

This is based on joint works with Konstantin Makarychev and Yury Makarychev.