Three years ago, in August 2020, the Simons Institute co-hosted a workshop on Decoding Communication in Nonhuman Species. Looking back at the titles of the talks, none used the AI buzzwords that have become household terms over the last several months: ChatGPT, large language models, generative AI, chatbots. Yet the technology these terms refer to is central to the task at hand. The idea of the workshop was to apply cutting-edge methods from the field of natural language processing, especially large language models, to animal communication. This past June, the Institute co-hosted a follow-up workshop, Decoding Communication in Nonhuman Species II. With the technology advancing at a breathtaking pace, this second workshop gave participants an opportunity to take stock of how these efforts have progressed in the last three years — and whether the field has produced new tools that can help researchers understand what animals are talking about.
In this talk from the recent workshop on Decoding Communication in Nonhuman Species, Bryan Pardo (Northwestern) presents work applying iterative decoding and acoustic token modeling to music audio synthesis. The outputs of this procedure range from high-quality audio compression to variations on the input music that preserve its style, genre, beat, and instrumentation while varying the specifics of timbre and rhythm.
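The core idea behind this style of generation can be illustrated with a toy sketch of iterative decoding over a masked token sequence: score every masked position, commit the most confident predictions, and re-mask the rest for the next pass. This is a minimal illustration of the general technique, not Pardo's actual system; the scoring "model" below is a hypothetical stand-in for a trained network over codec tokens.

```python
import numpy as np

MASK = -1  # sentinel for a masked position

def toy_model(tokens, vocab_size, rng):
    """Hypothetical stand-in for a learned model: per-position logits.

    A real system would condition on the unmasked tokens; here we just
    draw pseudo-logits so the decoding loop can be demonstrated.
    """
    return np.stack([rng.standard_normal(vocab_size) for _ in tokens])

def iterative_decode(length, vocab_size, steps=4, seed=0):
    """Fill a fully masked sequence over several refinement steps.

    Each step: score all still-masked positions, commit the most
    confident fraction, and leave the remainder masked for later passes.
    """
    rng = np.random.default_rng(seed)
    tokens = np.full(length, MASK)
    for step in range(steps):
        masked = np.flatnonzero(tokens == MASK)
        if masked.size == 0:
            break
        logits = toy_model(tokens, vocab_size, rng)
        conf = logits.max(axis=1)      # confidence per position
        picks = logits.argmax(axis=1)  # best token per position
        # Commit an increasing fraction of the remaining masked positions;
        # on the final step this commits everything that is left.
        k = max(1, int(np.ceil(masked.size * (step + 1) / steps)))
        order = masked[np.argsort(-conf[masked])]
        tokens[order[:k]] = picks[order[:k]]
    return tokens

seq = iterative_decode(length=16, vocab_size=8)
print(seq)  # a fully decoded sequence: no MASK entries remain
```

In an audio system, each integer would index a learned acoustic codebook entry, and varying which positions are masked is what lets the same loop act as either a compressor (few masks) or a variation generator (many masks).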
Nima Anari (Stanford) and collaborators obtain the first polylogarithmic-time sampling algorithms for determinantal point processes, directed Eulerian tours, and more.
In a series of talks in the boot camp for this summer's program on Analysis and TCS, Dor Minzer (MIT) surveyed recent developments in PCPs fueled by hypercontractive estimates for global functions — functions that are not significantly affected by restricting a small set of coordinates.
One might recall that one of the inaugural programs hosted by the Simons Institute, back in Fall 2013, was Real Analysis in Computer Science. In the decade since, the field has cultivated influential new themes such as global hypercontractivity and spectral independence, incorporated methods based on high-dimensional expanders and stochastic calculus, and also enabled striking applications in hardness of approximation, Markov chain analysis, and coding theory. All this progress makes this an excellent time to reconvene a program on this topic.
Greetings from Berkeley! Summer programs are in full swing at the Simons Institute and it’s been great to see and catch up with many friends from near and far.
In April 2023, the Simons Institute hosted a workshop on Multigroup Fairness and the Validity of Statistical Judgment, the latest in a series of workshops and clusters we’ve organized on the theme of algorithmic fairness, as part of our Algorithms, Society, and the Law initiative.
In this episode of Polylogues, Simons Institute Director Shafi Goldwasser sits down with workshop leader Omer Reingold (Stanford) to explore the key themes of the workshop.
As the prevalence of machine learning expands across diverse domains, the role of algorithms in influencing decisions that significantly impact our lives becomes increasingly important. Concerns regarding the fairness of algorithmic decisions have spurred the proposal and investigation of the framework of multigroup fairness, which provides a mathematical foundation for assessing fairness across numerous overlapping subpopulations.
In this talk in the Simons Institute’s recent workshop on Multigroup Fairness and the Validity of Statistical Judgment, Rachel Lin (University of Washington) elucidates the close relationships between several recently proposed notions of multigroup fairness (multi-accuracy, multi-calibration, and outcome indistinguishability) and concepts of pseudorandomness, namely leakage simulation from cryptography, weak regularity from complexity theory, and graph regularity from graph theory. By exploring these connections, Lin demonstrates that ideas from either area can lead to improvements in the other.