Abstract
Many problems in machine learning that involve discrete structures or subset selection can be phrased in the language of submodular set functions. The property of submodularity, often referred to as a 'discrete analog of convexity', expresses the notion of diminishing marginal returns and captures combinatorial versions of rank and dependence. Submodular functions occur in a variety of areas, including graph theory, information theory, combinatorial optimization, stochastic processes, and game theory. In machine learning, they emerge as potential functions of graphical models, as utility functions in active learning and sensing, and in models of diversity, structured sparse estimation, matrix approximation, and network inference. The lectures will give an introduction to the theory of submodular functions, example applications in machine learning, algorithms for submodular optimization, and current research directions.
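For concreteness, the diminishing-marginal-returns property mentioned above admits a standard formal statement; the following sketch records it for a set function $f : 2^V \to \mathbb{R}$ on a finite ground set $V$ (the symbols $f$, $V$, $A$, $B$, and $e$ are introduced here purely for illustration):

\[
  f(A \cup \{e\}) - f(A) \;\ge\; f(B \cup \{e\}) - f(B)
  \qquad \text{for all } A \subseteq B \subseteq V \text{ and } e \in V \setminus B .
\]

A function satisfying this inequality is submodular: the gain from adding an element $e$ can only shrink as the set it is added to grows. A typical example is a coverage function, where the number of newly covered items decreases as the selected set becomes larger.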