The projection games problem is the following problem in combinatorial optimization:
Given a bipartite graph G = (A, B, E), sets of labels Sigma_A and Sigma_B for the vertices, and functions pi_e: Sigma_A -> Sigma_B on the edges ("projections"), the goal is to find a labeling of the vertices f_A: A -> Sigma_A, f_B: B -> Sigma_B that satisfies as many of the projections as possible, i.e., such that pi_e(f_A(a)) = f_B(b) for as many edges e = (a, b) in E as possible.
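For concreteness, here is a minimal sketch (my illustration, not from the talk) of evaluating a candidate labeling of a projection game, i.e., computing the fraction of satisfied edges; the instance and all names are hypothetical:

```python
def satisfied_fraction(edges, pi, f_A, f_B):
    """Fraction of edges e = (a, b) whose projection is satisfied,
    i.e. pi[e](f_A[a]) == f_B[b].

    edges: list of (a, b) pairs; pi: dict mapping each edge to a
    function Sigma_A -> Sigma_B; f_A, f_B: dicts assigning each
    vertex its label.
    """
    good = sum(1 for (a, b) in edges if pi[(a, b)](f_A[a]) == f_B[b])
    return good / len(edges)

# A tiny instance: A = {a1, a2}, B = {b1}; both edges project mod 2.
edges = [("a1", "b1"), ("a2", "b1")]
pi = {e: (lambda x: x % 2) for e in edges}
f_A = {"a1": 3, "a2": 4}   # labels from Sigma_A = {0, ..., 5}
f_B = {"b1": 1}            # labels from Sigma_B = {0, 1}
print(satisfied_fraction(edges, pi, f_A, f_B))  # 0.5
```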
The projection games problem is known to be very hard to approximate: even if there exists a labeling that satisfies all the edges, it is NP-hard to find a labeling that satisfies even an epsilon = o(1) fraction of the edges.
This theorem ("the strong PCP theorem") is the starting point of most optimal NP-hardness of approximation results. In this talk we'll discuss new approximation algorithms for projection games.
This is joint work with Pasin Manurangsi.
We show optimal (up to a constant factor) NP-hardness for maximum constraint satisfaction problem with k variables per constraint (Max-k-CSP), whenever k is larger than the domain size. This follows from our main result concerning CSPs given by a predicate: a CSP is approximation resistant if its predicate contains a subgroup that is balanced pairwise independent. Our main result is analogous to Austrin and Mossel’s, bypassing their Unique-Games Conjecture assumption whenever the predicate is an abelian subgroup.
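To unpack the key condition (my illustration, not from the paper): the uniform distribution on a subset S of Z_q^k is balanced pairwise independent if every coordinate is uniform over Z_q and every pair of coordinates is uniform over Z_q x Z_q. A small sketch that checks this directly:

```python
from itertools import combinations
from collections import Counter

def is_balanced_pairwise_independent(S, q, k):
    """Check whether the uniform distribution on S (a subset of Z_q^k)
    is balanced pairwise independent: each coordinate is uniform over
    Z_q, and each pair of coordinates is uniform over Z_q x Z_q."""
    n = len(S)
    # Each single coordinate must take every value equally often.
    for i in range(k):
        counts = Counter(x[i] for x in S)
        if len(counts) != q or any(c * q != n for c in counts.values()):
            return False
    # Each pair of coordinates must take every value pair equally often.
    for i, j in combinations(range(k), 2):
        counts = Counter((x[i], x[j]) for x in S)
        if len(counts) != q * q or any(c * q * q != n for c in counts.values()):
            return False
    return True

# Example: the subgroup {(a, b, a + b mod 3)} of Z_3^3 (the support of
# a 3-LIN constraint) is balanced pairwise independent.
S = [(a, b, (a + b) % 3) for a in range(3) for b in range(3)]
print(is_balanced_pairwise_independent(S, q=3, k=3))  # True
```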
Our main ingredient is a new gap-amplification technique inspired by XOR-lemmas. Using this technique, we also improve the NP-hardness of approximating Independent-Set on bounded-degree graphs, Almost-Coloring, Two-Prover-One-Round-Game, and various other problems.
There are two ways randomness enters the analysis of Big Data: either the input data or the algorithm may "toss coins" (or even both). The former involves stochastic models of the data; the aim in theory is to obtain provable algorithms that work in the worst case under such models. The talk will recap mixture models, their use in clustering, planted dense graphs, and other problems and algorithms that work under the model. Coin tossing by the algorithm will be touched upon, depending on the coverage of this topic in other lectures.
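As a toy illustration of the stochastic-model viewpoint (mine, not from the talk): data drawn from a mixture of well-separated Gaussians can be clustered by a simple iterative scheme, and it is the model assumption (separation) that makes a guarantee possible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw points from a mixture of two well-separated spherical Gaussians.
n, d = 200, 10
centers = np.stack([np.zeros(d), 8.0 * np.ones(d)])
labels_true = rng.integers(0, 2, size=n)
X = centers[labels_true] + rng.standard_normal((n, d))

# Lloyd's algorithm (k-means): under enough separation, this recovers
# the planted clustering with high probability.
means = X[rng.choice(n, size=2, replace=False)]
for _ in range(20):
    # Assign each point to its nearest current mean.
    dists = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    # Recompute each mean as the centroid of its cluster (keep the old
    # mean if a cluster happens to be empty).
    means = np.stack([
        X[assign == c].mean(axis=0) if (assign == c).any() else means[c]
        for c in range(2)
    ])

agreement = max((assign == labels_true).mean(), (assign != labels_true).mean())
print(f"agreement with planted clustering: {agreement:.2f}")
```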
Why is it that graphs play such a key role in essentially all applications of mathematics to real-life problems? The main reason is that there are so many situations in the natural sciences, engineering, the social sciences, and beyond where we are trying to understand a complex system that consists of interacting pairs. These can be molecules in some biological system, two companies engaged in a business transaction, etc. But what do you do when the underlying interactions may involve more than two parties? We are currently much less prepared to deal with such problems. It is natural to introduce into the game higher-dimensional simplicial complexes (a graph is a one-dimensional simplicial complex). This leads us into a fascinating set of problems and challenges, some of which are already resolved and many of which are still wide open. As it turns out, graphs are just one example of a basic combinatorial object that is inherently one-dimensional and whose higher-dimensional counterparts suggest themselves. I will introduce recent work on high-dimensional permutations and tournaments.
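For a concrete entry point (my illustration, not from the lectures): a two-dimensional permutation in this theory can be identified with an order-n Latin square, just as a one-dimensional permutation corresponds to an n x n permutation matrix. The sketch below checks the Latin-square property.

```python
def is_latin_square(M):
    """Check that M is an order-n Latin square: every row and every
    column contains each symbol 0..n-1 exactly once. Latin squares
    correspond to two-dimensional permutations (permutation matrices
    being the one-dimensional case)."""
    n = len(M)
    symbols = set(range(n))
    rows_ok = all(set(row) == symbols for row in M)
    cols_ok = all(set(col) == symbols for col in zip(*M))
    return rows_ok and cols_ok

# The cyclic Latin square M[i][j] = (i + j) mod n.
n = 4
M = [[(i + j) % n for j in range(n)] for i in range(n)]
print(is_latin_square(M))  # True
```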
These lectures are based on joint papers with several coauthors, e.g.
- http://arxiv.org/abs/1010.1400
- http://arxiv.org/abs/0903.1359
- http://arxiv.org/abs/1106.0649
- http://arxiv.org/abs/1108.5042
- http://arxiv.org/abs/1203.3312
- http://arxiv.org/abs/1208.4218
- http://arxiv.org/abs/1302.1684
- http://arxiv.org/abs/1307.2684
The first session of this talk will take place on Friday, September 6 from 11:00 am – 12:30 pm.
The second session of this talk will take place on Friday, September 6 from 2:00 pm – 3:30 pm.
One natural way to deal with the challenge of Big Data is to make the data smaller. That is, to seek a compact (sublinear) representation of the data so that certain properties are (approximately) preserved. We can think of these as a generalization of sufficient statistics for properties of the data. The area of "streaming algorithms" seeks algorithms which can build such a summary as information is processed incrementally. An important class of streaming algorithms are sketches: carefully designed random projections of the input data that can be computed efficiently under the constraints of the streaming model. These have the attractive property that they can be easily computed in parallel over partitions of the input. They aim at optimizing a variety of properties: the size of the summary; the time required to compute the summary; the number of 'true' random bits required; and the accuracy guarantees that result.
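As one illustration of these properties (my sketch, not from the tutorial): the Count-Min sketch is a small randomized summary of a frequency vector that supports incremental updates, merging across partitions of the input, and approximate point queries that never underestimate.

```python
import numpy as np

class CountMinSketch:
    """A Count-Min sketch: d hash rows of width w. Point queries carry
    additive error about 2N/w with probability 1 - 2^(-d), where N is
    the total count inserted."""
    def __init__(self, w=256, d=4, seed=0):
        self.w, self.d = w, d
        rng = np.random.default_rng(seed)
        # Random odd multipliers for simple multiply-then-mod hashing.
        self.salts = [int(x) | 1 for x in rng.integers(1, 2**31, size=d)]
        self.table = np.zeros((d, w), dtype=np.int64)

    def _cols(self, item):
        h = hash(item)
        return [(salt * h) % self.w for salt in self.salts]

    def add(self, item, count=1):
        for r, c in enumerate(self._cols(item)):
            self.table[r, c] += count

    def query(self, item):
        # Each row overestimates the true count; take the minimum.
        return min(self.table[r, c] for r, c in enumerate(self._cols(item)))

    def merge(self, other):
        # Sketches of two data partitions built with the same hash
        # functions combine by entrywise addition.
        self.table += other.table

# Two partitions of a stream, sketched independently and then merged.
s1, s2 = CountMinSketch(), CountMinSketch()
for x in ["a"] * 50 + ["b"] * 5:
    s1.add(x)
for x in ["a"] * 30 + ["c"] * 7:
    s2.add(x)
s1.merge(s2)
print(s1.query("a"))  # >= 80, and typically exactly 80
```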
This tutorial will present some of the powerful results that have emerged from these efforts:
- sketches for sparse recovery over high dimensional inputs, with applications to compressed sensing and machine learning;
- effective sketches for Euclidean space, providing fast instantiations of the Johnson-Lindenstrauss Lemma (a toy instantiation is sketched after this list);
- techniques for sampling from the support of a vector, and estimating the size of the support set;
- applications of these ideas to large graph computations and linear algebra;
- lower bounds on what is possible to summarize, arising from Communication Complexity and Information Complexity.
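To make the Johnson-Lindenstrauss item concrete, here is a toy instantiation (mine, not the tutorial's; the fast variants use structured rather than dense matrices): a random Gaussian projection to k = O(log n / eps^2) dimensions approximately preserves pairwise Euclidean distances.

```python
import numpy as np

rng = np.random.default_rng(1)

# n points in d dimensions, projected down to k dimensions.
n, d, k = 50, 2000, 400
X = rng.standard_normal((n, d))

# JL projection: a random Gaussian matrix scaled by 1/sqrt(k).
G = rng.standard_normal((d, k)) / np.sqrt(k)
Y = X @ G

# Compare a few pairwise distances before and after projection.
for i, j in [(0, 1), (2, 3), (4, 5)]:
    orig = np.linalg.norm(X[i] - X[j])
    proj = np.linalg.norm(Y[i] - Y[j])
    print(f"pair ({i},{j}): distortion {proj / orig:.3f}")  # close to 1
```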
The first session of this talk will take place on Thursday, September 5 from 4:00 pm – 5:30 pm.
The second session of this talk will take place on Friday, September 6 from 9:00 am – 10:30 am.
The area of high-dimensional statistics concerns problems in which the ambient dimension is of the same order or substantially larger than the sample size. Although its roots are classical (dating back to Kolmogorov), it has become the focus of increasing attention in the modern era of big data. In this tutorial lecture, we survey some recent progress in different areas, including sparse estimation, covariance estimation, low-rank matrix recovery, graphical model selection, and non-parametric regression, with emphasis on the techniques used to obtain non-asymptotic results.
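As a pointer to the flavor of such results (a toy sketch of mine, not part of the lecture): in sparse linear regression with n samples and p >> n features, l1-regularized least squares (the Lasso) can recover an s-sparse coefficient vector with non-asymptotic l2 error scaling roughly like sqrt(s log p / n). The choice of regularization level below follows that standard scaling.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse linear model: p >> n, only s coefficients are nonzero.
n, p, s = 100, 500, 5
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:s] = 3.0
sigma = 0.5
y = X @ beta_true + sigma * rng.standard_normal(n)

# Lasso via ISTA (proximal gradient with soft-thresholding) on the
# objective (1/2n)||y - X beta||^2 + lam * ||beta||_1.
lam = 2 * sigma * np.sqrt(np.log(p) / n)  # ~ 2*sigma*sqrt(log p / n)
L = np.linalg.norm(X, 2) ** 2 / n         # Lipschitz constant of the gradient
beta = np.zeros(p)
for _ in range(500):
    grad = X.T @ (X @ beta - y) / n
    z = beta - grad / L
    beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox step

print("support recovered:", set(np.flatnonzero(np.abs(beta) > 0.1)))
print("l2 error:", np.linalg.norm(beta - beta_true))
```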
The first session of this talk will take place on Thursday, September 5 from 11:00 am – 12:30 pm.