
Workshop Talk | Aug. 26, 2013

Approximation Algorithms For Projection Games

The projection games problem is the following problem in combinatorial optimization:

Given a bipartite graph G = (A, B, E), label sets Sigma_A and Sigma_B for the vertices on each side, and a projection pi_e: Sigma_A -> Sigma_B on each edge, the goal is to find labelings f_A: A -> Sigma_A and f_B: B -> Sigma_B that satisfy as many of the projections as possible, i.e., such that pi_e(f_A(a)) = f_B(b) for as many edges e = (a, b) in E as possible.
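To fix the notation, here is a hypothetical brute-force sketch that evaluates and optimizes the objective on a tiny instance. The instance, names, and exhaustive approach are illustrative only; the algorithms discussed in the talk are far more refined.

```python
from itertools import product

def satisfied(edges, proj, fA, fB):
    """Count edges e=(a,b) whose projection is satisfied: proj[e][fA[a]] == fB[b]."""
    return sum(1 for (a, b) in edges if proj[(a, b)][fA[a]] == fB[b])

def brute_force(A, B, edges, proj, SA, SB):
    """Try every labeling; feasible only for tiny instances."""
    best = -1
    for la in product(SA, repeat=len(A)):
        fA = dict(zip(A, la))
        for lb in product(SB, repeat=len(B)):
            fB = dict(zip(B, lb))
            best = max(best, satisfied(edges, proj, fA, fB))
    return best

# Tiny example: A = {a1, a2}, B = {b1}, Sigma_A = Sigma_B = {0, 1}
A, B = ["a1", "a2"], ["b1"]
edges = [("a1", "b1"), ("a2", "b1")]
proj = {("a1", "b1"): {0: 0, 1: 1},   # pi_e is the identity
        ("a2", "b1"): {0: 1, 1: 0}}   # pi_e flips the label
print(brute_force(A, B, edges, proj, [0, 1], [0, 1]))  # -> 2 (both edges satisfiable)
```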

The projection games problem is known to be very hard to approximate: even if there exists a labeling that satisfies all edges, it is NP-hard to find a labeling that satisfies an epsilon = o(1) fraction of the edges.
This hardness result ("the strong PCP theorem") is the starting point of most optimal NP-hardness of approximation results. In this talk we'll discuss new approximation algorithms for projection games.

This is joint work with Pasin Manurangsi.

Workshop Talk | Aug. 26, 2013

Hardness of Maximum Independent Set in Structured Hypergraphs

This talk shall focus on the problem of finding large independent sets in hypergraphs with certain useful structural properties. In particular, we shall describe tight inapproximability for independent set in (i) almost 2-colorable 3-uniform hypergraphs and (ii) k-uniform hypergraphs with a blocked-partiteness property, a generalization of the k-partiteness property studied in the work of Bansal and Khot [BK10]. Both of the above results were known earlier [GS11, BK10] only assuming the Unique Games Conjecture, and we shall describe the Fourier analytic techniques used to obtain them unconditionally. The talk shall also sketch applications of the second result, which yield tight inapproximability for two well-known scheduling problems: Concurrent Open Shop and the Assembly Line Problem.
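For reference, independence in a k-uniform hypergraph simply means that no hyperedge is fully contained in the chosen vertex set; the hardness above concerns finding large such sets. A minimal checker on a made-up instance:

```python
def is_independent(hyperedges, S):
    """S is independent iff no hyperedge lies entirely inside S."""
    S = set(S)
    return all(not set(e) <= S for e in hyperedges)

# A 3-uniform hypergraph on vertices {0, ..., 4}
H = [(0, 1, 2), (1, 2, 3), (2, 3, 4)]
print(is_independent(H, {0, 1, 3, 4}))  # True: every edge uses vertex 2
print(is_independent(H, {0, 1, 2}))     # False: contains the edge (0, 1, 2)
```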
Workshop Talk | Aug. 26, 2013

An Analytic Approach to Parallel Repetition

Workshop Talk | Aug. 26, 2013

Approximation Resistance from Pairwise Independent Subgroups

We show optimal (up to a constant factor) NP-hardness for the maximum constraint satisfaction problem with k variables per constraint (Max-k-CSP), whenever k is larger than the domain size. This follows from our main result concerning CSPs given by a predicate: a CSP is approximation resistant if its predicate contains a subgroup that is balanced pairwise independent. Our main result is analogous to Austrin and Mossel's, bypassing their Unique Games Conjecture assumption whenever the predicate contains such an abelian subgroup.

Our main ingredient is a new gap-amplification technique inspired by XOR-lemmas. Using this technique, we also improve the NP-hardness of approximating Independent-Set on bounded-degree graphs, Almost-Coloring, Two-Prover-One-Round-Game, and various other problems.
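As a sanity check on the definition used above: the accepting set of 3-XOR, the subgroup {x in Z_2^3 : x_1 + x_2 + x_3 = 0}, is balanced pairwise independent, since its projection onto any two coordinates is uniform over {0,1}^2. A quick illustration of the definition (not code from the paper):

```python
from itertools import combinations, product
from collections import Counter

# The accepting set of 3-XOR: a subgroup of Z_2^3 with 4 elements.
H = [x for x in product((0, 1), repeat=3) if sum(x) % 2 == 0]

# Balanced pairwise independence: projecting H onto any two coordinates
# hits every pattern in {0,1}^2 equally often.
for i, j in combinations(range(3), 2):
    counts = Counter((x[i], x[j]) for x in H)
    assert len(counts) == 4 and all(c == len(H) // 4 for c in counts.values())
print("H is balanced pairwise independent")
```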

Workshop Talk | Sept. 6, 2013

Theory and Big Data

There are two ways randomness enters the analysis of Big Data: either the input data or the algorithm may "toss coins" (or even both). The former involves stochastic models of data. The goal in theory is to get provable algorithms that work in the worst case under such models. The talk will recap mixture models, their use in clustering, planted dense graphs, and other problems and algorithms that work under the model. Coin tossing by the algorithm will be touched upon, depending on the coverage of this topic in other lectures.
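The clustering of mixture models can be illustrated on the easiest case: samples from two well-separated one-dimensional Gaussians, clustered with plain Lloyd-style 2-means. This is a toy sketch with arbitrary parameters, not an algorithm from the talk:

```python
import random
import statistics

random.seed(0)
# Sample a two-component Gaussian mixture: means 0 and 10, unit variance.
data = [random.gauss(0, 1) for _ in range(300)] + \
       [random.gauss(10, 1) for _ in range(300)]

def two_means(xs, iters=20):
    """One-dimensional Lloyd iterations for k = 2."""
    c1, c2 = min(xs), max(xs)          # crude but adequate initialization
    for _ in range(iters):
        g1 = [x for x in xs if abs(x - c1) <= abs(x - c2)]
        g2 = [x for x in xs if abs(x - c1) > abs(x - c2)]
        c1, c2 = statistics.mean(g1), statistics.mean(g2)
    return sorted((c1, c2))

lo, hi = two_means(data)
print(round(lo, 1), round(hi, 1))  # close to the true means 0 and 10
```

With this much separation the clusters are recovered essentially exactly; the interesting regime studied in theory is when the component means are much closer together.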

Workshop Talk | Sept. 6, 2013

Some Geometric Perspectives on Combinatorics: High-Dimensional, Local and Local-to-Global II

Why is it that graphs play such a key role in essentially all applications of mathematics to real-life problems? The main reason is that there are so many situations in the natural sciences, engineering, the social sciences, and beyond where we are trying to understand a complex system that consists of interacting pairs. These can be molecules in some biological system, two companies engaged in a business transaction, etc. But what do you do when the underlying interactions may involve more than two parties? We are currently much less prepared to deal with such problems. It is natural to introduce into the game higher-dimensional simplicial complexes (a graph is a one-dimensional simplicial complex). This leads us into a fascinating set of problems and challenges, some of which are already resolved and many of which are still wide open. As it turns out, graphs are just one example where a basic combinatorial object is inherently one-dimensional and where higher-dimensional counterparts suggest themselves. I will introduce recent work on high-dimensional permutations and tournaments.

These lectures are based on joint papers with several coauthors, e.g.

  • http://arxiv.org/abs/1010.1400
  • http://arxiv.org/abs/0903.1359
  • http://arxiv.org/abs/1106.0649
  • http://arxiv.org/abs/1108.5042
  • http://arxiv.org/abs/1203.3312
  • http://arxiv.org/abs/1208.4218
  • http://arxiv.org/abs/1302.1684
  • http://arxiv.org/abs/1307.2684

The first session of this talk will take place on Friday, September 6 from 11:00 am – 12:30 pm.

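The high-dimensional permutations mentioned in the abstract have a concrete smallest case: a two-dimensional permutation is an n×n×n array of 0s and 1s with exactly one 1 on every axis-parallel line, which is the same thing as an order-n Latin square. A minimal checker, illustrative and not tied to the linked papers:

```python
def is_latin_square(L):
    """A Latin square of order n corresponds to the two-dimensional permutation
    M[i][j][k] = 1 iff L[i][j] == k: each symbol appears exactly once in every
    row and every column, and each cell holds exactly one symbol."""
    n = len(L)
    syms = set(range(n))
    rows_ok = all(set(row) == syms for row in L)
    cols_ok = all({L[i][j] for i in range(n)} == syms for j in range(n))
    return rows_ok and cols_ok

print(is_latin_square([[0, 1, 2], [1, 2, 0], [2, 0, 1]]))  # True
print(is_latin_square([[0, 1, 2], [1, 1, 0], [2, 0, 1]]))  # False: repeated 1 in row 2
```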
Workshop Talk | Sept. 6, 2013

Some Geometric Perspectives on Combinatorics: High-Dimensional, Local and Local-to-Global I

Why is it that graphs play such a key role in essentially all applications of mathematics to real-life problems? The main reason is that there are so many situations in the natural sciences, engineering, the social sciences, and beyond where we are trying to understand a complex system that consists of interacting pairs. These can be molecules in some biological system, two companies engaged in a business transaction, etc. But what do you do when the underlying interactions may involve more than two parties? We are currently much less prepared to deal with such problems. It is natural to introduce into the game higher-dimensional simplicial complexes (a graph is a one-dimensional simplicial complex). This leads us into a fascinating set of problems and challenges, some of which are already resolved and many of which are still wide open. As it turns out, graphs are just one example where a basic combinatorial object is inherently one-dimensional and where higher-dimensional counterparts suggest themselves. I will introduce recent work on high-dimensional permutations and tournaments.

These lectures are based on joint papers with several coauthors, e.g.

  • http://arxiv.org/abs/1010.1400
  • http://arxiv.org/abs/0903.1359
  • http://arxiv.org/abs/1106.0649
  • http://arxiv.org/abs/1108.5042
  • http://arxiv.org/abs/1203.3312
  • http://arxiv.org/abs/1208.4218
  • http://arxiv.org/abs/1302.1684
  • http://arxiv.org/abs/1307.2684

The second session of this talk will take place on Friday, September 6 from 2:00 pm – 3:30 pm.

Workshop Talk | Sept. 6, 2013

Streaming, Sketching and Sufficient Statistics II

One natural way to deal with the challenge of Big Data is to make the data smaller. That is, to seek a compact (sublinear) representation of the data so that certain properties are (approximately) preserved. We can think of these as a generalization of sufficient statistics for properties of the data. The area of "streaming algorithms" seeks algorithms which can build such a summary as information is processed incrementally. An important class of streaming algorithms are sketches: carefully designed random projections of the input data that can be computed efficiently under the constraints of the streaming model. These have the attractive property that they can be easily computed in parallel over partitions of the input. They aim at optimizing a variety of properties: the size of the summary; the time required to compute the summary; the number of 'true' random bits required; and the accuracy guarantees that result.

This tutorial will present some of the powerful results that have emerged from these efforts:

  • sketches for sparse recovery over high dimensional inputs, with applications to compressed sensing and machine learning;
  • effective sketches for Euclidean space, providing fast instantiations of the Johnson-Lindenstrauss Lemma;
  • techniques for sampling from the support of a vector, and estimating the size of the support set;
  • applications of these ideas to large graph computations and linear algebra;
  • lower bounds on what is possible to summarize, arising from Communication Complexity and Information Complexity.

The first session of this talk will take place on Thursday, September 5 from 4:00 pm – 5:30 pm.

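As a toy illustration of the sketching idea above (not code from the tutorial): a dense random ±1 projection, scaled by 1/sqrt(k), approximately preserves Euclidean norms, in the spirit of the Johnson-Lindenstrauss Lemma. The dimensions and seed below are arbitrary.

```python
import math
import random

random.seed(1)

def sketch(x, k):
    """Project x into k dimensions with random +/-1 entries, scaled by 1/sqrt(k)."""
    d = len(x)
    R = [[random.choice((-1.0, 1.0)) for _ in range(d)] for _ in range(k)]
    return [sum(R[i][j] * x[j] for j in range(d)) / math.sqrt(k) for i in range(k)]

def norm(v):
    return math.sqrt(sum(t * t for t in v))

x = [random.gauss(0, 1) for _ in range(1000)]  # a 1000-dimensional input
s = sketch(x, 200)                             # a 200-dimensional summary
ratio = norm(s) / norm(x)
print(round(ratio, 2))                         # close to 1, up to ~1/sqrt(k) error
```

The sketch is a linear map, which is what makes it easy to compute incrementally over a stream and to combine across partitions of the input; production sketches replace the explicit matrix with hashing for efficiency.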
Workshop Talk | Sept. 5, 2013

Streaming, Sketching and Sufficient Statistics I

One natural way to deal with the challenge of Big Data is to make the data smaller. That is, to seek a compact (sublinear) representation of the data so that certain properties are (approximately) preserved. We can think of these as a generalization of sufficient statistics for properties of the data. The area of "streaming algorithms" seeks algorithms which can build such a summary as information is processed incrementally. An important class of streaming algorithms are sketches: carefully designed random projections of the input data that can be computed efficiently under the constraints of the streaming model. These have the attractive property that they can be easily computed in parallel over partitions of the input. They aim at optimizing a variety of properties: the size of the summary; the time required to compute the summary; the number of 'true' random bits required; and the accuracy guarantees that result.

This tutorial will present some of the powerful results that have emerged from these efforts:

  • sketches for sparse recovery over high dimensional inputs, with applications to compressed sensing and machine learning;
  • effective sketches for Euclidean space, providing fast instantiations of the Johnson-Lindenstrauss Lemma;
  • techniques for sampling from the support of a vector, and estimating the size of the support set;
  • applications of these ideas to large graph computations and linear algebra;
  • lower bounds on what is possible to summarize, arising from Communication Complexity and Information Complexity.

The second session of this talk will take place on Friday, September 6 from 9:00 am – 10:30 am.

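The support-size bullet above can be illustrated with a k-minimum-values style estimator: map distinct items to random points in [0, 1) and read off the support size from the k-th smallest value. This is a simplified sketch (the dictionary of hash values is for clarity; a true streaming implementation keeps only the k smallest):

```python
import heapq
import random

random.seed(2)

def estimate_support(stream, k=64):
    """KMV-style sketch: with hashes uniform in [0,1), the k-th smallest hash
    value h_k over n distinct items is about k/n, so n is roughly (k-1)/h_k."""
    hashes = {}
    for item in stream:
        if item not in hashes:
            hashes[item] = random.random()   # stand-in for a real hash function
    smallest = heapq.nsmallest(k, hashes.values())
    if len(smallest) < k:                    # fewer than k distinct items: exact
        return len(hashes)
    return (k - 1) / smallest[-1]

stream = [i % 500 for i in range(10000)]     # 500 distinct items, many repeats
est = estimate_support(stream)
print(round(est))                            # roughly 500, up to ~1/sqrt(k) error
```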
Workshop Talk | Sept. 5, 2013

High-Dimensional Statistics II

The area of high-dimensional statistics concerns problems in which the ambient dimension is of the same order or substantially larger than the sample size. Although its roots are classical (dating back to Kolmogorov), it has become the focus of increasing attention in the modern era of big data. In this tutorial lecture, we survey some recent progress in different areas, including sparse estimation, covariance estimation, low-rank matrix recovery, graphical model selection, and non-parametric regression, with emphasis on the techniques used to obtain non-asymptotic results.

The first session of this talk will take place on Thursday, September 5 from 11:00 am – 12:30 pm.

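Of the topics surveyed above, sparse estimation admits perhaps the shortest self-contained illustration: soft thresholding a noisy observation of a sparse vector at the universal level sigma·sqrt(2 log d) beats the raw observation in squared error. A toy sketch with made-up parameters, not code from the lecture:

```python
import math
import random

random.seed(3)
d, s, sigma = 1000, 10, 1.0
theta = [5.0] * s + [0.0] * (d - s)               # s-sparse signal in R^d
y = [t + random.gauss(0, sigma) for t in theta]   # noisy observation y = theta + w

def soft_threshold(v, lam):
    """Shrink every coordinate toward 0 by lam; coordinates below lam become 0."""
    return [math.copysign(max(abs(t) - lam, 0.0), t) for t in v]

lam = sigma * math.sqrt(2 * math.log(d))          # the universal threshold
theta_hat = soft_threshold(y, lam)
err_raw = sum((a - b) ** 2 for a, b in zip(y, theta))
err_st = sum((a - b) ** 2 for a, b in zip(theta_hat, theta))
print(err_st < err_raw)                           # thresholding wins on sparse theta
```

The point is non-asymptotic: the error of the thresholded estimate scales with the sparsity s·log d rather than the ambient dimension d, which is the flavor of guarantee the tutorial emphasizes.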
