Tuesday, December 15th, 2015

9:30 am – 10:20 am

I will present a new proof of Cheeger's inequality, which can be generalized to incorporate robust vertex expansion. The proof builds on the ideas of Lovasz and Kannan for analyzing mixing times of random walks. I will also discuss similar analyses for different local graph partitioning algorithms (random walks, pagerank, evolving sets). Finally, I will present an example showing the limitations of local graph partitioning algorithms in attacking the small-set expansion hypothesis, disproving a conjecture by Oveis Gharan about evolving sets.
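
For readers unfamiliar with the sweep-cut step that such local partitioning algorithms share, here is a minimal illustrative sketch (a plain lazy random walk followed by a conductance sweep, under simplifying assumptions; it is not the speaker's analysis or algorithm):

```python
import numpy as np

def lazy_walk_sweep_cut(A, seed, steps=10):
    """Run a lazy random walk from `seed`, then sweep vertices in order of
    probability/degree and return the lowest-conductance prefix.
    Assumes A is a symmetric 0/1 adjacency matrix with zero diagonal and
    no isolated vertices (illustrative assumptions)."""
    d = A.sum(axis=1).astype(float)
    n = len(d)
    W = 0.5 * (np.eye(n) + A / d[:, None])   # lazy walk: stay w.p. 1/2, else move to a random neighbor
    p = np.zeros(n)
    p[seed] = 1.0
    for _ in range(steps):
        p = W.T @ p                          # propagate the walk distribution

    order = np.argsort(-p / d)               # sweep order: descending p(v)/deg(v)
    vol_total = d.sum()
    best_set, best_cond = None, np.inf
    in_set = np.zeros(n, dtype=bool)
    vol = cut = 0.0
    for v in order[:-1]:                     # never take the whole vertex set
        in_set[v] = True
        vol += d[v]
        cut += d[v] - 2.0 * A[v, in_set].sum()   # edges leaving the current set
        cond = cut / min(vol, vol_total - vol)
        if cond < best_cond:
            best_cond, best_set = cond, np.flatnonzero(in_set).copy()
    return best_set, best_cond
```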

Joint work with Siu On Chan, Tsz Chiu Kwok, and Yin Tat Lee.

11:00 am – 11:50 am
We show that a random d-regular bipartite graph is an optimal (i.e., Ramanujan) expander with nonzero probability. Notably, we use tools inspired by asymptotic (i.e., large n limit) random matrix theory to prove statements about finite dimensional matrices. The mediating role is played by the expected characteristic polynomials of the random matrices in question, exploiting in particular their real-rootedness, interlacing, and invariance properties.
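
As a point of reference, the Ramanujan property for a d-regular bipartite graph says that every nontrivial singular value of its biadjacency matrix is at most 2*sqrt(d-1). A quick numerical check under the permutation model (a sum of d random permutation matrices, which is an assumption of this sketch and not necessarily the model of the talk) might look like:

```python
import numpy as np

def random_biadjacency(n, d, rng):
    """Biadjacency matrix of a random d-regular bipartite (multi)graph:
    a sum of d independent uniformly random n x n permutation matrices."""
    B = np.zeros((n, n))
    for _ in range(d):
        B[np.arange(n), rng.permutation(n)] += 1
    return B

def is_ramanujan(B, d, tol=1e-9):
    """Check that every nontrivial singular value is at most 2*sqrt(d-1)."""
    s = np.linalg.svd(B, compute_uv=False)
    return s[1] <= 2 * np.sqrt(d - 1) + tol   # s[0] is the trivial value d

rng = np.random.default_rng(0)
n, d, trials = 200, 3, 20
hits = sum(is_ramanujan(random_biadjacency(n, d, rng), d) for _ in range(trials))
print(f"{hits}/{trials} random samples met the 2*sqrt(d-1) bound")
```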
 
Joint work with Adam Marcus and Daniel Spielman.
2:30 pm – 3:20 pm
Speaker: Yin Tat Lee, MIT
I will present the first almost-linear time algorithm for constructing linear-sized spectral sparsifiers for graphs. This improves upon all previous constructions of linear-sized spectral sparsifiers, which require at least n^2 time. A key ingredient in our algorithm is a novel combination of two techniques used in the literature for constructing spectral sparsifiers: random sampling by effective resistance, and adaptive constructions based on barrier functions.
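
One of the two ingredients mentioned, sampling by effective resistance, can be illustrated with the classic Spielman-Srivastava-style construction below (a sketch that computes resistances with a dense pseudoinverse purely for clarity; it is far from the talk's almost-linear-time algorithm):

```python
import numpy as np

def sparsify_by_effective_resistance(edges, weights, n, q, rng):
    """Classic effective-resistance sampling, shown only to illustrate one
    ingredient of the talk.  edges: list of (u, v) pairs on n vertices;
    weights: positive edge weights; q: number of samples to draw."""
    m = len(edges)
    B = np.zeros((m, n))                     # signed edge-vertex incidence matrix
    for i, (u, v) in enumerate(edges):
        B[i, u], B[i, v] = 1.0, -1.0
    L = B.T @ (weights[:, None] * B)         # weighted Laplacian
    Lpinv = np.linalg.pinv(L)

    # Effective resistance of edge (u, v) is (e_u - e_v)^T L^+ (e_u - e_v).
    reff = np.einsum('ij,jk,ik->i', B, Lpinv, B)
    probs = weights * reff
    probs /= probs.sum()

    # Sample q edges with replacement; reweight each sample by 1/(q * p_e).
    new_w = np.zeros(m)
    for i in rng.choice(m, size=q, p=probs):
        new_w[i] += weights[i] / (q * probs[i])
    kept = new_w > 0
    return [edges[i] for i in np.flatnonzero(kept)], new_w[kept]
```
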
3:50 pm – 4:40 pm
We prove an average-case depth hierarchy theorem for Boolean circuits over the standard basis of AND, OR, and NOT gates.  Our hierarchy theorem says that for every $d \geq 2$, there is an explicit $n$-variable Boolean function $f$, computed by a linear-size depth-$d$ formula, such that any depth-$(d-1)$ circuit that agrees with $f$ on a $(1/2 + o_n(1))$ fraction of all inputs must have size $\exp(n^{\Omega(1/d)})$.  This answers an open question posed by Håstad in his Ph.D. thesis.
 
Our average-case depth hierarchy theorem implies that the polynomial hierarchy is infinite relative to a random oracle with probability 1, confirming a conjecture of Håstad, Cai, and Babai.  We also use our result to show that there is no "approximate converse" to the results of Linial, Mansour, Nisan and Boppana on the total influence of constant-depth circuits, thus answering a question posed by O'Donnell, Kalai, and Hatami.
 
A key ingredient in our proof is a notion of random projections, which generalizes random restrictions.
 
Joint work with Ben Rossman and Rocco Servedio. 

Wednesday, December 16th, 2015

9:30 am – 10:20 am

In this talk I will discuss faster algorithms and improved sample complexities for the fundamental problem of estimating the top eigenvector of a matrix A^T A.  In particular, I will show how the classic method of shift-and-invert preconditioning efficiently reduces the top eigenvector problem to the problem of solving a sequence of linear systems. Moreover, by leveraging recent sampling-based linear system solvers, I will show how this allows us to obtain improved algorithms for both the offline eigenvector estimation problem, where A is given explicitly, as well as the online eigenvector estimation problem, where we only receive independent samples from a distribution D with second moment matrix A^T A.
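
A minimal sketch of the shift-and-invert reduction: with a shift slightly above the top eigenvalue, power iteration on (shift·I - A^T A)^{-1} converges in few iterations, and each iteration is exactly one linear-system solve. The sketch below solves that system directly, whereas the talk replaces it with fast sampling-based solvers; the shift is assumed known here purely for illustration.

```python
import numpy as np

def shift_invert_top_eigvec(A, shift, iters=30, rng=None):
    """Power iteration on (shift * I - A^T A)^{-1}: a sketch of the
    shift-and-invert reduction.  `shift` is assumed to lie slightly above
    the top eigenvalue of A^T A (finding it is itself part of the full
    method); each step here is a direct linear-system solve."""
    rng = rng or np.random.default_rng(0)
    d = A.shape[1]
    M = shift * np.eye(d) - A.T @ A        # positive definite for a valid shift
    x = rng.standard_normal(d)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        x = np.linalg.solve(M, x)          # one "linear system oracle" call
        x /= np.linalg.norm(x)
    rayleigh = x @ (A.T @ (A @ x))         # estimate of the top eigenvalue
    return x, rayleigh

# Toy usage: here the shift is set from the true top eigenvalue, only to
# demonstrate convergence.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 50))
top = np.linalg.eigvalsh(A.T @ A).max()
v, lam = shift_invert_top_eigvec(A, shift=1.01 * top, rng=rng)
print(abs(lam - top) / top)
```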

For the offline eigenvector problem, I will show how to compute an ϵ-approximate top eigenvector in time Õ([nnz(A) + d*sr(A)/gap^2]*log(1/ϵ)) or Õ([nnz(A)^{3/4} (d*sr(A))^{1/4} / √gap]*log(1/ϵ)). Here sr(A) is the stable rank, gap is the multiplicative eigengap, and Õ hides log factors in d and gap. This result improves upon the running time of both classic approaches as well as recent work on subspace embeddings and stochastic optimization in various parameter regimes.
 
For the online eigenvector problem, I will show how to refine a gap-approximate solution to an ϵ-approximation using O(v log log(1/ϵ)/gap^2 + v/(gap * ϵ)) samples.  Here v is a natural notion of the variance of the distribution D. Combined with previous work for computing gap-approximate solutions, this result improves upon known sample complexity and runtime results under a variety of assumptions on D.
 
This talk reflects joint work with Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, and Praneeth Netrapalli.
 
11:00 am – 11:50 am
We study the problem of computing the largest root of a real-rooted polynomial p(x) to within error \eps given only black-box access to it, i.e., for any x \in \R, the algorithm can query an oracle for the value of p(x), but it is not allowed access to the coefficients of p(x). A folklore result for this problem is that the largest root of a polynomial of degree n can be computed using n \log(1/\eps) oracle queries via the Newton iteration. We give a simple algorithm that queries the oracle at only 2^{O(\sqrt{\log n})} \log(1/\eps) points. It is based on accelerating the Newton method by using higher derivatives.

As a special case, we consider the problem of computing the top eigenvalue of a symmetric matrix to within error \eps in time polynomial in the input description. Well-known methods such as the power iteration and Lanczos iteration incur running time polynomial in 1/\eps, while Gaussian elimination takes more than n^4 bit operations. As a corollary of our main result, we obtain a nearly n^\omega bit-complexity algorithm for the top eigenvalue.
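
A sketch of the folklore Newton baseline referred to above (not the accelerated higher-derivative method of the talk): for a real-rooted polynomial, Newton iteration started above the largest root decreases monotonically to it. The known upper bound on the largest root and the finite-difference derivative are assumptions of this sketch.

```python
def largest_root_newton(p, upper_bound, eps=1e-9, h=1e-7, max_iter=10_000):
    """Newton iteration for the largest root of a real-rooted polynomial,
    given only oracle access to p(x).  Starting above the largest root,
    the iterates decrease monotonically toward it; the derivative is
    approximated with two extra oracle queries per step."""
    x = float(upper_bound)
    for _ in range(max_iter):
        px = p(x)
        dpx = (p(x + h) - p(x - h)) / (2 * h)   # finite-difference derivative
        if dpx == 0:
            break
        step = px / dpx
        x -= step
        if abs(step) < eps:
            break
    return x

# Toy usage: p(x) = (x - 1)(x - 2)(x - 5), whose largest root is 5.
p = lambda x: (x - 1) * (x - 2) * (x - 5)
print(largest_root_newton(p, upper_bound=100.0))
```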
 
This is joint work with Anand Louis.
2:30 pm – 3:20 pm
In this work we show that the integrality gap of the famous Held-Karp LP relaxation for ATSP is bounded by (log log n)^{O(1)}, which entails a polynomial-time (log log n)^{O(1)}-estimation algorithm; that is, we provide a polynomial-time algorithm that finds the cost of the best possible solution within a (log log n)^{O(1)} factor, but does not provide a solution with that cost.
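
For reference, the Held-Karp LP relaxation for ATSP mentioned above, stated in its standard form (over arc variables x_a of the complete digraph with costs c_a; this restatement is standard background, not quoted from the talk):

```latex
\begin{align*}
\min\; & \sum_{a \in A} c_a x_a \\
\text{s.t.}\; & x\!\left(\delta^{+}(S)\right) \ge 1 \quad \text{for all } \emptyset \neq S \subsetneq V, \\
 & x\!\left(\delta^{+}(v)\right) = x\!\left(\delta^{-}(v)\right) = 1 \quad \text{for all } v \in V, \\
 & x_a \ge 0 \quad \text{for all arcs } a.
\end{align*}
```

The integrality-gap statement above says that the cost of an optimal tour is within a (log log n)^{O(1)} factor of the optimal value of this LP.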

We prove this by making progress on Goddyn's Thin Tree conjecture; we show that every k-edge-connected graph contains a (log log n)^{O(1)}/k-thin tree. To tackle the Thin Tree conjecture, we build upon the recent resolution of the Kadison-Singer problem by Marcus, Spielman, and Srivastava. We provide sufficient conditions for answering the following question affirmatively: given a set of rank-1 quadratic forms and a collection of subsets of them, can we select one of the subsets whose total sum is bounded by a fraction of the sum of all the quadratic forms?
 
Finally, we address the problem of designing polynomial-time approximation algorithms (algorithms that also output a solution) matching the guarantee of the estimation algorithm. We prove that this entirely relies on finding a polynomial-time algorithm for our extension of the Kadison-Singer problem. Namely, we prove that ATSP can be log(n)^c-approximated in polynomial time for any c>0, and that it can be (log log n)^{O(1)}-approximated in quasi-polynomial time, assuming access to an oracle which solves our extension of Kadison-Singer.
3:50 pm – 4:40 pm

We study the widely used spectral clustering algorithms, which partition a graph into k clusters by (1) embedding the vertices of the graph into a low-dimensional space using the bottom eigenvectors of the Laplacian matrix, and (2) partitioning the embedded points via k-means algorithms. We show that, for a wide class of graphs, spectral clustering algorithms give a good approximation of the optimal clustering. While this approach was proposed in the early 1990s and has been applied extensively, prior to our work similar results were known only for graphs generated from stochastic models.
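
A compact sketch of the two-step procedure just described, using the normalized Laplacian and scikit-learn's KMeans (both common but assumed choices); this illustrates the algorithm being analyzed, not the paper's nearly-linear-time variant:

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(A, k, seed=0):
    """Steps (1)-(2) from the abstract: embed vertices with the bottom k
    eigenvectors of the normalized Laplacian, then run k-means on the
    embedded points.  Assumes A is a symmetric adjacency matrix with no
    isolated vertices."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(d)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)
    X = vecs[:, :k]                        # bottom k eigenvectors as coordinates
    X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
```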

We also give a nearly-linear time algorithm for partitioning well-clustered graphs, which is based on heat kernel embeddings and approximate nearest neighbor data structures.

Thursday, December 17th, 2015

9:00 am – 9:50 am

We consider nonnegative functions on an abelian group G that are sparse with respect to the Fourier basis. We establish combinatorial conditions on subsets S and T of Fourier basis elements under which nonnegative functions with Fourier support S are sums of squares of functions with Fourier support T. Our combinatorial condition involves constructing a chordal cover of an associated Cayley graph. Our result relies on two main ingredients: the decomposition of sparse positive semidefinite matrices with a chordal sparsity pattern, as well as a simple but key observation exploiting the structure of the Fourier basis elements of G.

We apply our general result to two examples. First, in the case where G = Z_2^n, we prove that any nonnegative quadratic form in n binary variables is a sum of squares of functions of degree at most ⌈n/2⌉, establishing a conjecture of Laurent. Second, we consider nonnegative functions of degree d on Z_N (when d divides N). By constructing a particular chordal cover of the d-th power of the N-cycle, we prove that any such function is a sum of squares of functions with at most 3d log(N/d) nonzero Fourier coefficients. Dually, this shows that a certain cyclic polytope in R^{2d} with N vertices has a semidefinite representation of size 3d log(N/d). Putting N = d^2 gives a family of polytopes with LP extension complexity Ω(d^2) and SDP extension complexity O(d log d). To the best of our knowledge, this is the first explicit family of polytopes whose semidefinite extension complexity is provably smaller than their linear programming extension complexity.

Joint work with Hamza Fawzi and James Saunderson. Preprint available at http://arxiv.org/abs/1503.01207

10:20 am – 11:10 am
We introduce a method for proving lower bounds on the efficacy of semidefinite programming (SDP) relaxations for combinatorial problems. In particular, we show that the cut, TSP, and stable set polytopes on n-vertex graphs are not the linear image of the feasible region of any SDP (i.e., any spectrahedron) of dimension less than 2^{n^c}, for some constant c>0. This result yields the first super-polynomial lower bounds on the semidefinite extension complexity of any explicit family of polytopes. 
Our results follow from a general technique for proving lower bounds on the positive semidefinite rank of a matrix. To this end, we establish a close connection between arbitrary SDPs and those arising from the sum-of-squares SDP hierarchy. For approximating maximum constraint satisfaction problems, we prove that polynomial-size SDPs are equivalent in power to those arising from degree-O(1) sum-of-squares relaxations. This result implies, for instance, that no family of polynomial-size SDP relaxations can achieve better than a 7/8-approximation for MAX-3-SAT.
 
This is joint work with James Lee (U. Washington) & David Steurer (Cornell Univ).
11:40 am – 12:30 pm

Recently, there has been some success in using the sum-of-squares hierarchy to solve classical problems in machine learning. However, the sum-of-squares algorithms are very slow, making them impractical for the machine learning setting. In this talk I will show that ideas from sum-of-squares rounding schemes can be used to obtain fast, spectral algorithms. I will focus on the problem of decomposing an overcomplete low-rank random tensor into rank-1 components, showing that ideas from the quasi-polynomial sum-of-squares algorithm can be adapted to achieve similar performance in subquadratic time.
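
As background for the spectral approach, the simplest version of the idea, contracting the tensor along a random direction and reading components off an eigendecomposition, already works in the easy orthogonal, undercomplete case; the sketch below covers only that case and is not the talk's subquadratic algorithm for overcomplete random tensors:

```python
import numpy as np

def decompose_orthogonal_3tensor(T, k, rng=None):
    """Contract a symmetric 3-tensor T along a random Gaussian direction g to
    get M = sum_i <a_i, g> a_i a_i^T, then eigendecompose M.  This recovers
    orthonormal components a_1..a_k (up to sign) in the undercomplete case."""
    rng = rng or np.random.default_rng(0)
    n = T.shape[0]
    g = rng.standard_normal(n)
    M = np.tensordot(T, g, axes=([2], [0]))   # M[i,j] = sum_l T[i,j,l] g[l]
    vals, vecs = np.linalg.eigh(M)
    idx = np.argsort(-np.abs(vals))[:k]       # keep the k largest in magnitude
    return vecs[:, idx]

# Toy usage: T = sum_i a_i ⊗ a_i ⊗ a_i with orthonormal components a_i.
rng = np.random.default_rng(1)
n, k = 20, 4
A_true, _ = np.linalg.qr(rng.standard_normal((n, k)))
T = sum(np.einsum('i,j,l->ijl', a, a, a) for a in A_true.T)
A_hat = decompose_orthogonal_3tensor(T, k, rng)
# Each recovered column should match a true component up to sign and order.
print(np.round(np.abs(A_true.T @ A_hat), 2))
```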

Based on joint work with Sam Hopkins, Jonathan Shi and David Steurer.

2:30 pm – 3:20 pm

This talk is concerned with clustering random graphs under "block-model" assumptions, an area that has seen remarkable progress recently. In the first part of the talk, I will describe an empirical comparison of the existing recovery theorems for the Degree-Corrected Stochastic Block Model (DC-SBM). The past few years have seen a number of new recovery algorithms with guarantees for this model. Since these represent sufficient conditions, one expects that cases exist which are not covered by the theory, but for which recovery is possible. We explore experimentally the limits of the current theory. We generate benchmark cases, with well-defined parameters controlling their difficulty, and then verify which of the existing results in the literature predict recovery. What we observe suggests that there is more ground to be covered towards deriving sharp thresholds for this problem. The second part of the talk will present a new class of block models recoverable by spectral clustering, which contains the DC-SBM as a special case. This class has O(n^2) free parameters (instead of the DC-SBM's O(n)), yet one can easily obtain the same recovery guarantees known for the DC-SBM, under conditions that expose more clearly the parameters that control recovery.
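
A minimal benchmark generator in the spirit of the experiments described, a Degree-Corrected SBM sampler with explicit difficulty parameters plus a naive spectral recovery check; the parameter choices and the recovery step are illustrative assumptions, not the talk's setup:

```python
import numpy as np

def sample_dcsbm(sizes, B, theta, rng):
    """Sample a Degree-Corrected SBM adjacency matrix (independent Bernoulli
    edges).  sizes: community sizes; B[k][l]: block connection rates;
    theta: per-node degree parameters."""
    labels = np.repeat(np.arange(len(sizes)), sizes)
    P = theta[:, None] * theta[None, :] * B[labels][:, labels]
    P = np.clip(P, 0.0, 1.0)
    U = rng.random(P.shape)
    A = np.triu((U < P).astype(float), k=1)
    return A + A.T, labels

rng = np.random.default_rng(0)
sizes = [150, 150]
B = np.array([[0.10, 0.02],
              [0.02, 0.10]])                    # in/out rates control difficulty
theta = rng.uniform(0.5, 1.5, size=sum(sizes))  # degree heterogeneity
A, labels = sample_dcsbm(sizes, B, theta, rng)

# Naive spectral recovery for two blocks: sign of the second eigenvector
# of the degree-normalized adjacency matrix.
d = np.maximum(A.sum(axis=1), 1e-12)
Anorm = A / np.sqrt(d[:, None] * d[None, :])
vals, vecs = np.linalg.eigh(Anorm)
guess = (vecs[:, -2] > 0).astype(int)
acc = max((guess == labels).mean(), ((1 - guess) == labels).mean())
print(f"agreement with planted labels: {acc:.2f}")
```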

Joint work with Yali Wan.

3:50 pm – 4:40 pm

In this talk we will consider finding approximate isoperimetric partitions of a probability distribution given a sample from the distribution. We first consider the case where we are explicitly given the distribution and discuss numerical methods to obtain Cheeger-like cuts by reducing the problem to computing eigenvectors of a graph Laplacian. Second, we look at known unbiased estimators for estimating the PDF from a sample. We will show how second-nearest-neighbor estimators can be combined with mesh generation methods to construct a graph Laplacian from a sample. Finally, we will show some experimental results on partitioning flow cytometry data related to HIV.
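
A sketch of the sample-based pipeline: build a nearest-neighbor graph on the sample, form its Laplacian, and split along the second (Fiedler) eigenvector. The plain k-NN graph here is a stand-in for the mesh-generation and second-nearest-neighbor estimators of the talk, and all parameters are illustrative.

```python
import numpy as np

def knn_fiedler_partition(X, k=10):
    """From a sample X (rows are points): build a symmetric k-nearest-neighbor
    graph, form its combinatorial Laplacian, and split the points by the sign
    of the Fiedler (second) eigenvector."""
    n = len(X)
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    np.fill_diagonal(D2, np.inf)
    A = np.zeros((n, n))
    rows = np.repeat(np.arange(n), k)
    cols = np.argsort(D2, axis=1)[:, :k].ravel()
    A[rows, cols] = 1.0
    A = np.maximum(A, A.T)                                 # symmetrize the k-NN graph
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]
    return fiedler >= 0                                    # boolean partition of the sample

# Toy usage: a sample from a two-mode distribution.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (100, 2)), rng.normal(+2, 0.5, (100, 2))])
side = knn_fiedler_partition(X)
print(side[:100].mean(), side[100:].mean())               # should be near 0/1 or 1/0
```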

Friday, December 18th, 2015

9:30 am – 10:20 am

We study the satisfiability of ordering constraint satisfaction problems (CSPs) above average. We prove the conjecture of Gutin, van Iersel, Mnich, and Yeo that the satisfiability above average of ordering CSPs of arity k is fixed-parameter tractable for every k. Previously, this was only known for k=2 and k=3. We also generalize this result to more general classes of CSPs, including CSPs with predicates defined by linear inequalities.

To obtain our results, we prove a new Bonami-type inequality for the Efron-Stein decomposition. The inequality applies to functions defined on arbitrary product probability spaces. In contrast to other variants of the Bonami inequality, it does not depend on the mass of the smallest atom in the probability space. We believe that this inequality is of independent interest.
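
For comparison, a minimal statement of the classical Bonami lemma on the uniform Boolean cube (a standard fact restated here, not the talk's new inequality; known analogues for general product spaces pick up a dependence on the smallest atom, which is what the new inequality avoids):

```latex
% Classical Bonami lemma (uniform measure on the Boolean cube):
% for every f : \{-1,1\}^n \to \mathbb{R} of degree at most d and every q \ge 2,
\| f \|_q \;\le\; (q-1)^{d/2} \, \| f \|_2 .
```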

This talk is based on a joint work with Konstantin Makarychev and Yuan Zhou.

11:00 am – 11:50 am

In the talk, I will describe the Correlation Clustering problem and give an overview of known results. I will present a new 2.06-approximation algorithm for the problem on complete graphs. Then, I will describe a semi-random model for the Correlation Clustering problem with noisy partial information, and give a PTAS for semi-random instances.
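
For context, the classic randomized Pivot (KwikCluster) algorithm, a 3-approximation on complete graphs, illustrates the problem setup; the 2.06-approximation of the talk is LP-based and is not reproduced here.

```python
import random

def pivot_correlation_clustering(n, positive, seed=0):
    """Classic randomized Pivot algorithm: repeatedly pick a random
    unclustered vertex and cluster it together with all of its unclustered
    '+' neighbors.  `positive` is a set of frozensets {u, v} marked '+';
    all other pairs are implicitly '-'."""
    rng = random.Random(seed)
    unclustered = set(range(n))
    clusters = []
    while unclustered:
        p = rng.choice(sorted(unclustered))          # random pivot
        cluster = {p} | {v for v in unclustered
                         if v != p and frozenset((p, v)) in positive}
        clusters.append(cluster)
        unclustered -= cluster
    return clusters

# Toy usage: two cliques of '+' edges with one noisy '+' edge across them.
plus = {frozenset((u, v)) for u in range(4) for v in range(4) if u < v}
plus |= {frozenset((u, v)) for u in range(4, 8) for v in range(4, 8) if u < v}
plus.add(frozenset((3, 4)))                          # noise
print(pivot_correlation_clustering(8, plus))
```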

Based on joint works with Shuchi Chawla, Tselil Schramm, and Grigory Yaroslavtsev; and with Yury Makarychev and Aravindan Vijayaraghavan.

11:50 am – 12:40 pm

Inspired by the computational abilities of the slime mold organism, several researchers have recently proposed dynamical systems aimed at solving fundamental problems such as min-cost flow and, more generally, linear programming. In this talk I will present these dynamics from an optimization viewpoint and show that they provably work. Together, these results shed some light on how nature might be solving instances of perhaps the most complex problem in P: linear programming.
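
A discretized sketch of the kind of dynamics in question, here the standard Physarum update for the shortest s-t path: maintain a conductance on each edge, route a unit electrical flow, and move each conductance toward the magnitude of the flow it carries. The step size, iteration count, and final thresholding are illustrative choices, not part of the talk's analysis.

```python
import numpy as np

def physarum_shortest_path(n, edges, cost, s, t, steps=2000, h=0.1):
    """Discretized Physarum dynamics for the shortest s-t path: keep a
    conductance x_e on each edge, route a unit electrical s-t flow with
    resistances cost_e / x_e, and move each x_e toward |flow_e|.
    `edges` is a list of undirected (u, v) pairs on n vertices."""
    m = len(edges)
    B = np.zeros((m, n))                           # signed incidence matrix
    for i, (u, v) in enumerate(edges):
        B[i, u], B[i, v] = 1.0, -1.0
    b = np.zeros(n)
    b[s], b[t] = 1.0, -1.0                         # inject one unit at s, remove at t
    x = np.ones(m)                                 # initial conductances
    for _ in range(steps):
        g = x / cost                               # edge conductances
        L = B.T @ (g[:, None] * B)                 # weighted graph Laplacian
        phi = np.linalg.lstsq(L, b, rcond=None)[0] # potentials (L is singular)
        q = g * (B @ phi)                          # electrical flow on each edge
        x = (1 - h) * x + h * np.abs(q)            # Physarum update
    return [edges[i] for i in range(m) if x[i] > 0.5]  # surviving edges ~ shortest path

# Toy usage: the path 0-1-3 (total cost 2) is the cheapest s-t route.
edges = [(0, 1), (1, 3), (0, 2), (2, 3), (1, 2)]
cost = np.array([1.0, 1.0, 2.0, 2.0, 1.0])
print(physarum_shortest_path(4, edges, cost, s=0, t=3))
```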

This is based on joint works with Damian Straszak (EPFL).