Berkeley University of California

Results 1861 - 1870 of 23883

Workshop Talk
|
July 1, 2025

Monotonicity Testing of High-Dimensional Distributions with Subcube Conditioning

We study monotonicity testing of high-dimensional distributions on {−1,1}^n in the model of subcube conditioning, suggested and studied by Canonne, Ron, and Servedio [CRS15] and Bhattacharyya and Chakraborty [BC18]. Previous work shows that the sample complexity of monotonicity testing must be exponential in n (Rubinfeld and Vasilian [RV20]; Aliakbarpour, Gouleakis, Peebles, Rubinfeld, and Yodpinyanee [AGPRY19]). We show that the subcube query complexity is Θ̃(n/ε²), by proving nearly matching upper and lower bounds. Our work is the first to use directed isoperimetric inequalities (developed for function monotonicity testing) to analyze a distribution testing algorithm. Along the way, we generalize an inequality of Khot, Minzer, and Safra [KMS18] to real-valued functions on {−1,1}^n. We also study uniformity testing of distributions that are promised to be monotone, a problem introduced by Rubinfeld and Servedio [RS09], using subcube conditioning. We show that the query complexity is Θ̃(√n/ε²). Our work proves the lower bound, which matches (up to polylogarithmic factors) the uniformity testing upper bound for general distributions (Canonne, Chen, Kamath, Levi, and Waingarten [CCKLW21]). Hence monotonicity does not help, beyond logarithmic factors, in testing uniformity of distributions with subcube conditional queries.
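The subcube conditioning model in this abstract can be made concrete with a small sketch: the tester fixes a subset of coordinates and receives a sample from the distribution conditioned on agreeing with that restriction. The sketch below is an illustrative oracle for an explicitly stored distribution (all names are hypothetical; it is not the paper's tester, which only assumes query access).

```python
import random

def subcube_sample(dist, restriction):
    """Sample from dist (a dict mapping {-1,1}-tuples to probabilities)
    conditioned on the subcube where coordinate i is fixed to restriction[i].
    restriction: dict {coordinate index: +1 or -1}."""
    support = [x for x in dist if all(x[i] == v for i, v in restriction.items())]
    weights = [dist[x] for x in support]
    total = sum(weights)
    if total == 0:
        raise ValueError("restriction has zero probability mass")
    return random.choices(support, weights=weights, k=1)[0]

# Example: uniform distribution on {-1,1}^2, conditioned on x_0 = 1.
uniform2 = {(a, b): 0.25 for a in (-1, 1) for b in (-1, 1)}
s = subcube_sample(uniform2, {0: 1})
```

A tester in this model issues many such conditional queries with different restrictions, rather than drawing unconditioned samples.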

Workshop Talk
|
July 1, 2025

Vizing's Theorem in Near-Linear Time

In his classic result, Vizing (1964) proved that any graph of maximum degree ∆ can be edge colored using at most ∆ + 1 different colors. Vizing's original proof is algorithmic and shows that such an edge coloring can be found in O(mn) time, where m and n denote the number of edges and vertices, respectively. This was subsequently improved to Õ(m√n) independently by [Arjomandi '82; Gabow et al. '85]. This bound remained the state of the art for four decades and was only recently improved to Õ(min{n², mn^{1/4}}) [Assadi '24; Bhattacharya et al. '24] using randomization. In this talk, I will present a completely new approach to edge coloring that leads to the first near-linear time algorithm for Vizing's theorem, running in O(m log ∆) time.
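Vizing's ∆ + 1 algorithm itself is intricate, but the naive baseline it improves on is easy to demonstrate: a single greedy pass properly colors the edges with at most 2∆ − 1 colors, since each edge conflicts with at most 2∆ − 2 incident edges. The sketch below shows only that baseline (it is not the talk's algorithm, and the names are illustrative).

```python
def greedy_edge_coloring(edges):
    """Greedy proper edge coloring: each edge gets the smallest color
    unused by edges sharing an endpoint. Uses at most 2*Delta - 1 colors,
    a baseline far weaker than Vizing's Delta + 1 guarantee."""
    used = {}    # vertex -> set of colors already on incident edges
    color = {}   # edge -> assigned color
    for (u, v) in edges:
        forbidden = used.setdefault(u, set()) | used.setdefault(v, set())
        c = 0
        while c in forbidden:
            c += 1
        color[(u, v)] = c
        used[u].add(c)
        used[v].add(c)
    return color

# Small graph with Delta = 3; greedy is allowed up to 2*3 - 1 = 5 colors.
coloring = greedy_edge_coloring([(0, 1), (1, 2), (0, 2), (2, 3)])
```

On this input the greedy pass happens to use three colors, matching ∆; in general it can need almost twice that.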

Contact

Disability Access Coordinator
simonsevents@berkeley.edu
People

Nipun Amarasinghe

Nipun Amarasinghe is currently a second year mathematics graduate student at Texas A&M University advised by Dr. J. M. Landsberg. Nipun's research interests are in algebraic geometry and its applications to the complexity of matrix multiplication and more...

People

Hanna Komlós

Hanna Komlós is a PhD student at New York University advised by Martín Farach-Colton. Her research is primarily in data structures, particularly randomized and external-memory data structures and fundamental combinatorial primitives with data-structural...

People

Cliff Stein

Clifford Stein is the Wei T. Chang Professor of Industrial Engineering and Operations Research and of Computer Science at Columbia University. He is a fellow of the ACM and the former Director of the Data Science Institute at Columbia University. He...

Workshop Talk
|
June 30, 2025

Talk by

No abstract available.

Workshop Talk
|
June 30, 2025

Sublinear Algorithms for Estimating Single-Linkage Clustering Costs

Single-linkage clustering is a fundamental method for data analysis. A single-linkage clustering is obtained by iteratively merging the closest pair of clusters, where initially every data point forms its own cluster and the distance between clusters is defined by their closest pair of points. Algorithmically, one can compute a single-linkage $k$-clustering (a partition into $k$ clusters) by computing a minimum spanning tree and dropping the $k-1$ most costly edges. The resulting forest spans the clusters. This motivates defining the cost $\mathrm{cost}_k$ of a single-linkage $k$-clustering as the weight of the corresponding spanning forest. If we view single-linkage clustering as computing a hierarchy of clusterings, the cost of the hierarchy is defined as the sum of the costs of the individual clusterings. We assume that the distances between data points are given as a graph $G$ with average degree $d$ and edge weights from $\{1,\dots, W\}$; if there is no edge, we take the distance to be infinite. We give a sampling-based algorithm that is given access to the adjacency-list representation of $G$ and computes in $\tilde O(d\sqrt{W}/\varepsilon^3)$ time a succinct representation of estimates $\widehat{\mathrm{cost}}_k$ of the cost of every $k$-clustering such that $\sum_{k=1}^n |\widehat{\mathrm{cost}}_k - \mathrm{cost}_k| \le \varepsilon\, \mathrm{cost}(G)$, where $\mathrm{cost}(G) = \sum_{k=1}^n \mathrm{cost}_k$ is the cost of the hierarchical clustering and $0 < \varepsilon < 1$. Thus we can approximate the cost of every $k$-clustering up to an absolute error that \emph{on average} is a $(1+\varepsilon)$-approximation of the true cost. In particular, our result also implies that we can estimate $\mathrm{cost}(G)$ up to a factor of $1 \pm \varepsilon$ in the same running time. We extend our results to the setting where edges represent similarities and the clusters are defined by a maximum spanning tree. In this setting, the running time of our algorithm is $\tilde O(dW/\varepsilon^3)$. We show that our algorithms are almost optimal by giving nearly matching lower bounds for estimating $\mathrm{cost}(G)$. We conduct extensive experiments that validate our theoretical findings. Joint work with Christian Sohler and Yi Xu.
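The exact (non-sublinear) pipeline the abstract describes, computing an MST, dropping the $k-1$ heaviest edges, and summing the remaining weights, can be sketched directly; the function names below are illustrative, and this is the baseline the sublinear algorithm approximates, not the sampling algorithm itself.

```python
def mst_kruskal(n, edges):
    """Kruskal's MST on vertices 0..n-1; edges are (weight, u, v) triples.
    Returns the list of MST edge weights."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    tree = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append(w)
    return tree

def single_linkage_costs(n, edges):
    """cost_k = weight of the forest left after dropping the k-1 heaviest
    MST edges, for k = 1..n; cost(G) is the sum of these."""
    w = sorted(mst_kruskal(n, edges), reverse=True)  # heaviest first
    return [sum(w[k - 1:]) for k in range(1, n + 1)]

# Path graph 0-1-2-3 with edge weights 1, 5, 2.
edges = [(1, 0, 1), (5, 1, 2), (2, 2, 3)]
costs = single_linkage_costs(4, edges)
```

Here the MST is the whole path, so dropping its heaviest edge (weight 5) first gives costs 8, 3, 1, 0 for $k = 1, \dots, 4$.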

Workshop Talk
|
June 30, 2025

Distribution learning with advice (Remote Talk)

Given i.i.d. samples from an unknown distribution $P$, the goal of distribution learning is to recover the parameters of a distribution that is close to $P$. We revisit this problem when the learner is given as advice the parameters of a distribution $Q$. Suppose $P$ is from the class of balanced product distributions over the $n$-dimensional Boolean hypercube. We show that there is an efficient algorithm that learns $P$ to within TV distance $\epsilon$ with sample complexity $\tilde{O}(n^{1-c\eta}/\epsilon^2)$ for a constant $c>0$, provided $\|\mathbf{p} - \mathbf{q}\|_1 < \epsilon\, d^{(1-\eta)/2}$. Here, $\mathbf{p}$ and $\mathbf{q}$ are the mean vectors of $P$ and $Q$ respectively, and $\eta$ is unknown a priori. Without advice, it is known that the sample complexity must be linear in $n$. We show similar results for learning high-dimensional Gaussians.
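For intuition, the advice-free baseline for a product distribution over the hypercube is simply to estimate each coordinate mean empirically, which is what drives the linear-in-$n$ sample complexity without advice. The sketch below is a hedged illustration on synthetic data (all names and parameters are hypothetical, not from the paper).

```python
import random

def learn_product_means(samples):
    """Empirical coordinate means of {-1,1}-valued samples: the natural
    estimator of the mean vector of a product distribution on the cube."""
    n = len(samples[0])
    m = len(samples)
    return [sum(x[i] for x in samples) / m for i in range(n)]

random.seed(0)
true_means = [0.2, -0.4, 0.0]  # one mean per coordinate of a balanced product distribution
samples = [
    tuple(1 if random.random() < (1 + mu) / 2 else -1 for mu in true_means)
    for _ in range(20000)
]
est = learn_product_means(samples)
```

With $m$ samples each estimate deviates by about $1/\sqrt{m}$, so recovering all $n$ coordinates to accuracy $\epsilon/\sqrt{n}$ in TV needs roughly $n/\epsilon^2$ samples; good advice lets the algorithm of the talk beat this.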

Workshop Talk
|
June 30, 2025

Student Talks

Yiqiao Bao

Title: Nearly Tight Bounds on Testing of Metric Properties

Abstract: Given a non-negative $n \times n$ matrix viewed as a set of distances between $n$ points, we consider the property testing problem of deciding whether it is a metric. For general metrics, our paper is the first to consider these questions. We prove an upper bound of $O(n^{2/3}/\epsilon^{4/3})$ on the query complexity for this problem. Our algorithm is simple, but the analysis requires great care in bounding the variance of the number of violating triangles in a sample. When $\epsilon$ is a slowly decreasing function of $n$ (rather than a constant, as is standard), we prove a lower bound with matching dependence on $n$ of $\Omega(n^{2/3})$, ruling out any property tester with $o(n^{2/3})$ query complexity unless its dependence on $1/\epsilon$ is super-polynomial.

Based on joint work with Sampath Kannan and Erik Waingarten.
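A tester in this spirit samples triples of points and rejects on any triangle-inequality violation. The sketch below illustrates that template only; it is not the tuned algorithm from the paper, and the sample sizes and names are hypothetical.

```python
import random

def violates_triangle(d, i, j, k):
    """True if points i, j, k violate the triangle inequality under the
    candidate (symmetric) distance matrix d."""
    a, b, c = d[i][j], d[j][k], d[i][k]
    return a + b < c or b + c < a or a + c < b

def sample_test_metric(d, trials, rng):
    """Reject (return False) if any sampled triple violates the triangle
    inequality; otherwise accept. Illustrative sketch only."""
    n = len(d)
    for _ in range(trials):
        i, j, k = rng.sample(range(n), 3)
        if violates_triangle(d, i, j, k):
            return False
    return True

rng = random.Random(1)
metric = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]  # shortest-path distances on a path: a genuine metric
broken = [[0, 1, 9], [1, 0, 1], [9, 1, 0]]  # d(0,2) = 9 > d(0,1) + d(1,2) = 2
ok = sample_test_metric(metric, 50, rng)
bad = sample_test_metric(broken, 50, rng)
```

The hard part, as the abstract notes, is not the sampler but bounding the variance of the number of violating triangles a sample catches.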

Amir Azarmehr

Title: Stochastic Matching via In-n-Out Local Computation Algorithms

Abstract: This talk is based on joint work with Soheil Behnezhad, Alma Ghafari, and Ronitt Rubinfeld.

Consider the following stochastic matching problem. We are given a known graph G = (V, E). An unknown subgraph G_p = (V, E_p) is realized where E_p includes every edge of E independently with some probability p ∈ (0, 1]. The goal is to query a sparse subgraph H of G, such that the realized edges in H include an approximate maximum matching of G_p.

We show that a poly(1/p)-degree subgraph can obtain a (1 − ε)-approximation for any arbitrarily small fixed ε > 0, unifying two long lines of work that achieve O(1)-approximation using poly(1/p) degree and (1 − ε)-approximation using quasipoly(1/p) degree. Beyond its quantitative improvement, a key conceptual contribution of our work is to connect local computation algorithms (LCAs) to the stochastic matching problem for the first time. While prior work on LCAs mainly focuses on their out-queries (the number of vertices probed to produce the output of a given vertex), our analysis also bounds the in-queries (the number of vertices that probe a given vertex). We prove that the outputs of LCAs with bounded in- and out-queries (in-n-out LCAs for short) have limited correlation, a property that our analysis crucially relies on and might find applications beyond stochastic matchings.
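The problem setup itself is easy to simulate. The sketch below realizes G_p from a known graph G and runs a greedy maximal matching on the realized edges of a queried subgraph H; greedy gives only the classical 1/2-approximation, not the (1 − ε) guarantee of the talk, and the graph and names are illustrative.

```python
import random

def realize(edges, p, rng):
    """Each edge of E survives independently with probability p,
    yielding the edge set of the realized subgraph G_p."""
    return [e for e in edges if rng.random() < p]

def greedy_matching(edges):
    """Greedy maximal matching: a classical 1/2-approximation to the
    maximum matching (far weaker than the talk's (1 - eps) guarantee)."""
    matched, M = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            M.append((u, v))
            matched.update((u, v))
    return M

rng = random.Random(42)
# Known graph G: a 6-cycle. Here the queried subgraph H is all of G,
# which is trivially sparse since G has maximum degree 2.
G = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
Gp = realize(G, 0.5, rng)   # one realization G_p
M = greedy_matching(Gp)     # matching among the realized edges of H
```

The algorithmic question of the talk is how to choose a sparse H in advance, before the realization, so that the realized edges of H still contain a near-maximum matching of G_p.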

Janani Sundaresan

Title: Distributed Triangle Detection is Hard in Few Rounds

Abstract: In the CONGEST model, $n$ vertices with information only about their neighborhoods are allowed to communicate with each other over the edges of the input graph. Communication happens in synchronous rounds with a bandwidth of $O(\log n)$. We prove that detecting a triangle in this model requires $\Omega(\log \log n)$ rounds. Prior to our work, the only known lower bound was that at least two rounds are necessary.

It is known that standard communication complexity arguments that have been used to get lower bounds in the CONGEST model in the past are incapable of giving any meaningful multi-round lower bounds for this problem. Our main contribution is a new information-theoretic technique that combines classical round elimination arguments of communication complexity with the point-to-point communication aspects of distributed networks and can be of independent interest.

Joint work with Sepehr Assadi (https://arxiv.org/abs/2504.01802).

The Simons Institute for the Theory of Computing is the world's leading venue for collaborative research in theoretical computer science.

© 2013–2026 Simons Institute for the Theory of Computing. All Rights Reserved.