We give a deterministic algorithm for approximately computing the fraction of Boolean assignments that satisfy a degree-2 polynomial threshold function. Given a degree-2 input polynomial p(x_1,...,x_n) and a parameter eps > 0, the algorithm approximates Pr[p(x) >= 0] to within an additive ±eps in time poly(n, 2^{poly(1/eps)}). (Note that it is NP-hard to determine whether the above probability is nonzero, so any sort of multiplicative approximation is almost certainly impossible even for efficient randomized algorithms.) This is the first deterministic algorithm for this counting problem whose running time is a fixed polynomial in n for any constant eps > 0. For "regular" polynomials p (those in which no individual variable's influence is large compared to the sum of all n variable influences), our algorithm runs in poly(n, 1/eps) time.
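To make the quantity concrete, here is a minimal Python sketch of the probability being estimated: an exact brute-force count (feasible only for tiny n) and the naive randomized sampler. Both are illustrative baselines only; the point of the abstract is that the additive approximation can be achieved *deterministically*. The sparse-dictionary encoding of p is an assumption of this sketch, not the paper's.

```python
import itertools
import random

def ptf_accept_prob_exact(lin, quad, const, n):
    """Exactly compute Pr[p(x) >= 0] over uniform x in {-1,1}^n by
    enumerating all 2^n assignments (only feasible for small n).
    lin maps i -> coefficient of x_i; quad maps (i, j) -> coefficient
    of x_i * x_j; const is the constant term."""
    count = 0
    for x in itertools.product((-1, 1), repeat=n):
        val = const
        val += sum(c * x[i] for i, c in lin.items())
        val += sum(c * x[i] * x[j] for (i, j), c in quad.items())
        if val >= 0:
            count += 1
    return count / 2 ** n

def ptf_accept_prob_mc(lin, quad, const, n, samples=20_000, seed=0):
    """Naive randomized estimator of the same probability, accurate
    to an additive error ~ 1/sqrt(samples) with high probability."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        val = const
        val += sum(c * x[i] for i, c in lin.items())
        val += sum(c * x[i] * x[j] for (i, j), c in quad.items())
        if val >= 0:
            hits += 1
    return hits / samples
```

For example, p(x) = x_1 x_2 is nonnegative exactly when the two bits agree, so both estimators should return a value near 1/2.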
We give a new framework for proving the existence of low-degree, polynomial approximators for Boolean functions with respect to broad classes of non-product distributions. Our proofs do not rely on the dominant Fourier-based methods from computational learning theory. Instead, we derive our approximations using techniques related to the classical method of moments.
The main new application is a polynomial-time algorithm for agnostically learning any function of a constant number of halfspaces (i.e., depth-2 neural networks) with respect to any log-concave distribution on R^n (for any constant accuracy parameter). This result was not known even for the case of PAC learning the intersection of two halfspaces.
Additionally, we show that in the smoothed-analysis setting, the above results hold with respect to any distribution that obeys a sub-exponential tail bound (i.e., sub-exponential or sub-gaussian densities).
Joint work with Raghu Meka.
We show that no polynomial-sized linear programming relaxation can achieve better than a 1/2-approximation for MAX-CUT, a 7/8-approximation for MAX-3SAT, or a 3/4-approximation for MAX-2SAT.
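Each of these thresholds is exactly the guarantee of the naive algorithm that outputs a uniformly random assignment: a random cut satisfies each edge with probability 1/2, and a random assignment satisfies a k-clause unless all k literals come out false. A quick exact check of that arithmetic:

```python
from fractions import Fraction

def random_assignment_value(k):
    """Expected fraction of width-k SAT clauses satisfied by a
    uniformly random assignment: a clause fails only when all k of
    its literals are false, which happens with probability 2^{-k}."""
    return 1 - Fraction(1, 2 ** k)

# These coincide with the LP inapproximability thresholds above:
# MAX-3SAT -> 7/8, MAX-2SAT -> 3/4 (and 1/2 per edge for MAX-CUT).
print(random_assignment_value(3), random_assignment_value(2))
```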
This is accomplished by bringing together two formerly disparate lines of research. On the one hand, there has been a recent sequence of exciting lower bounds on the size of extended formulations for various polytopes that arise in combinatorial optimization. At the same time, researchers have extensively studied the power of specific LP hierarchies for approximating NP-hard problems. We show that for max-CSP problems, general polynomial-sized LPs are exactly as powerful as LPs arising from a constant number of rounds of the Sherali-Adams hierarchy.
Joint work with Siu On Chan, Prasad Raghavendra, and David Steurer.
A partially symmetric function is a boolean function that is invariant under reordering of all but a constant number of its variables. The set of partially symmetric functions includes juntas, symmetric functions, and a number of other functions. They were first studied by Shannon (1949) in the context of circuit complexity and have since been studied (under various names) in many other areas of theoretical computer science. Most recently, partially symmetric functions appeared in the context of property testing, where they are conjectured to essentially characterize the set of functions for which isomorphism testing can be done with a constant number of queries.
In this talk, we will examine how tools from the analysis of boolean functions might be used to understand partially symmetric functions and resolve some fundamental open problems related to these functions.
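The definition unpacks to: f depends only on the bits in some constant-size set T plus the Hamming weight of the remaining bits. A brute-force Python check of that property (my own illustrative encoding, not from the talk):

```python
import itertools

def is_partially_symmetric(f, n, t_vars):
    """Check by full enumeration (2^n evaluations) that
    f: {0,1}^n -> {0,1} depends only on the bits indexed by t_vars
    plus the Hamming weight of the remaining bits -- i.e., f is
    invariant under every reordering of the variables outside t_vars."""
    rest = [i for i in range(n) if i not in t_vars]
    seen = {}
    for x in itertools.product((0, 1), repeat=n):
        key = (tuple(x[i] for i in t_vars), sum(x[i] for i in rest))
        if key in seen and seen[key] != f(x):
            return False
        seen[key] = f(x)
    return True
```

For instance, x_0 AND Majority(x_1, x_2, x_3) is partially symmetric with asymmetric part {0}, while the 2-junta x_0 AND x_1 is not (its asymmetric part must contain both 0 and 1).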
We give an O(1/epsilon)-query property testing algorithm which distinguishes whether an unknown set has surface area at most A or is epsilon-far from every set with surface area at most (4/pi) A. Our result works under n-dimensional Lebesgue measure or n-dimensional Gaussian measure. Previous work only treated the 1-dimensional case.
We will survey various generalizations of the results of Kahn, Kalai, and Linial, and that of Friedgut (Friedgut's Junta theorem) to graphs other than the hypercube. Of particular interest will be their generalization to the setting of Cartesian product of arbitrary undirected graphs (or equivalently, reversible Markov chains) because of their applications to hardness of approximation. We will sketch a proof in this setting, extending the arguments of Rossignol, and Falik-Samorodnitsky.
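On the hypercube itself, the influence of variable i is the probability over a uniform input that flipping bit i changes the function's value; the KKL theorem guarantees a variable of influence Omega(log n / n) for balanced functions. A small exact computation, as a reference point for the generalizations discussed above:

```python
import itertools

def influences(f, n):
    """Exactly compute the influence of each variable of
    f: {0,1}^n -> {0,1}: the fraction of inputs x on which
    flipping bit i changes f(x). Runs in O(n * 2^n) time."""
    counts = [0] * n
    for x in itertools.product((0, 1), repeat=n):
        fx = f(x)
        for i in range(n):
            y = list(x)
            y[i] ^= 1  # flip bit i
            if f(tuple(y)) != fx:
                counts[i] += 1
    return [c / 2 ** n for c in counts]
```

A dictator f(x) = x_0 has influence vector (1, 0, ..., 0), while Majority on 3 bits gives each variable influence 1/2 (flipping a bit matters exactly when the other two disagree).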
This is joint work with Madhur Tulsiani.
Given two Gaussian vectors that are positively correlated, what is the probability that they both land in some fixed set A? Borell proved that this probability is maximized (over sets A with a given volume) when A is a half-space. We will give a new and simple proof of this fact, which also gives some stronger results. In particular, we can show that half-spaces uniquely maximize the probability above, and that sets which almost maximize this probability must be close to half-spaces. We will also mention some applications to testing, and to the analysis of the Goemans-Williamson algorithm.
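For the half-space {x_1 >= 0} the probability in question has a classical closed form (Sheppard's formula): the quadrant probability for standard Gaussians with correlation rho is 1/4 + arcsin(rho)/(2*pi). A Monte Carlo sketch against that formula, purely to make the maximized quantity concrete:

```python
import math
import random

def mc_joint_halfspace(rho, samples=100_000, seed=0):
    """Estimate Pr[X >= 0 and Y >= 0] for standard Gaussians X, Y
    with correlation rho, using the coupling Y = rho*X + sqrt(1-rho^2)*Z
    for an independent standard Gaussian Z."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x = rng.gauss(0, 1)
        y = rho * x + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
        if x >= 0 and y >= 0:
            hits += 1
    return hits / samples

def sheppard(rho):
    """Sheppard's formula: exact quadrant probability 1/4 + arcsin(rho)/(2*pi).
    By Borell's theorem, no other set of Gaussian volume 1/2 does better."""
    return 0.25 + math.asin(rho) / (2 * math.pi)
```

At rho = 0 the coordinates are independent and the formula gives 1/4; at rho = 1/2 it gives 1/4 + 1/12 = 1/3.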
A boolean predicate $f:\{0,1\}^k\to\{0,1\}$ is said to be {\em somewhat approximation resistant} if for some constant $\tau > \frac{|f^{-1}(1)|}{2^k}$, given a $\tau$-satisfiable instance of the Max-$k$-CSP$(f)$ problem, it is NP-hard to find an assignment that {\it strictly beats} the naive algorithm that outputs a uniformly random assignment. Let $\tau(f)$ denote the supremum over all $\tau$ for which this holds. It is known that a predicate is somewhat approximation resistant precisely when its Fourier degree is at least $3$. For such predicates, we give a characterization of the {\it hardness gap} $(\tau(f) - \frac{|f^{-1}(1)|}{2^k})$ up to a factor of $O(k^5)$. We relate the hardness gap to the Fourier mass on coefficients of degree 3 or higher, giving hardness results when the Fourier mass is large and an SDP-based approximation algorithm for the case when the Fourier mass is small. We also give a similar characterization of the {\it integrality gap} for the natural SDP relaxation of Max-$k$-CSP$(f)$ after $\Omega(n)$ rounds of the Lasserre hierarchy.
For a predicate $f:\{-1,1\}^k \to \{0,1\}$ with $\rho(f) = \frac{|f^{-1}(1)|}{2^k}$, we call the predicate strongly approximation resistant if given a near-satisfiable instance of CSP$(f)$, it is computationally hard to find an assignment such that the fraction of constraints satisfied is outside the range $[\rho(f)-\Omega(1), \rho(f)+\Omega(1)]$.
We present a characterization of strongly approximation resistant predicates under the Unique Games Conjecture. We also present characterizations in the mixed linear and semi-definite programming hierarchy and the Sherali-Adams linear programming hierarchy. In the former case, the characterization coincides with the one based on UGC. Each of the two characterizations is in terms of the existence of a probability measure on a natural convex polytope associated with the predicate.
The predicate is called approximation resistant if given a near-satisfiable instance of CSP$(f)$, it is computationally hard to find an assignment such that the fraction of constraints satisfied is at least $\rho(f)+\Omega(1)$. When the predicate is odd, i.e., $f(-z)=1-f(z)$ for all $z\in \{-1,1\}^k$, it is easily observed that the notion of approximation resistance coincides with that of strong approximation resistance. Hence for odd predicates, in all the above settings, our characterization of strong approximation resistance is also a characterization of approximation resistance.
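The two quantities driving these definitions are easy to compute exactly for a concrete predicate. A small sketch (my own encoding of predicates as Python functions on $\{-1,1\}^k$):

```python
import itertools

def density(f, k):
    """rho(f) = |f^{-1}(1)| / 2^k: the expected fraction of
    constraints satisfied by a uniformly random assignment."""
    return sum(f(z) for z in itertools.product((-1, 1), repeat=k)) / 2 ** k

def is_odd(f, k):
    """Check the oddness condition f(-z) = 1 - f(z) for all z,
    under which approximation resistance and strong approximation
    resistance coincide."""
    return all(f(tuple(-zi for zi in z)) == 1 - f(z)
               for z in itertools.product((-1, 1), repeat=k))
```

For example, 3-bit parity (satisfied when $z_1 z_2 z_3 = -1$) has density 1/2 and is odd, whereas 3-bit AND has density 1/8 and is not odd.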
Joint work with Subhash Khot and Pratik Worah.