Berkeley University of California
Results 1291 - 1300 of 23832

Workshop Talk
|
Sept. 12, 2025

Embracing Chaos: How Instability and Implicit Bias Drive Generalization in Deep Networks

It is well established that gradient descent (GD) successfully trains deep neural networks that generalize, yet the precise mechanisms remain an active area of research. This presentation delves into the training dynamics of GD, examining how hyperparameters shape the optimization path. We will discuss two key phenomena, implicit regularization and training instability, and present research showing how they act as a crucial bias that steers the learning process toward generalizable solutions.
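The instability half of this story can be illustrated on the simplest possible objective, independent of anything specific to the talk: for gradient descent on a one-dimensional quadratic with curvature `lam`, the iterates contract exactly when the learning rate stays below 2/lam, the classical stability threshold that "edge of stability" analyses of deep-network training build on. The sketch below is illustrative only; the talk's actual models and experiments are not reproduced here.

```python
# Illustrative sketch: GD on f(x) = 0.5 * lam * x**2 has gradient lam * x,
# so the iterate obeys x_{t+1} = (1 - lr * lam) * x_t. The update is a
# contraction iff |1 - lr * lam| < 1, i.e. iff lr < 2 / lam.
def gd_trajectory(lr, lam=1.0, x0=1.0, steps=50):
    x = x0
    for _ in range(steps):
        x -= lr * lam * x  # one gradient step on f
    return abs(x)

stable = gd_trajectory(lr=1.5)    # lr < 2/lam: magnitude shrinks toward 0
unstable = gd_trajectory(lr=2.5)  # lr > 2/lam: magnitude oscillates and grows
print(stable < 1e-3, unstable > 1e3)
```

In deep networks the relevant curvature is that of the loss along the training trajectory, which itself changes as training proceeds; this feedback between step size and curvature is what makes the dynamics interesting.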

People

Ryan O'Donnell

Ryan O'Donnell is a professor of computer science at Carnegie Mellon University. His research interests include quantum computation and information theory, approximability of optimization problems, spectral graph theory, analysis of Boolean functions...

Workshop Talk
|
Sept. 12, 2025

Sampling Algorithms for Machine Learning with Auxiliary Random Variables and Diffusion Models

The vast majority of training for supervised learning algorithms today is done via optimization, owing to the maturity and speed of efficient gradient-based algorithms. A Bayesian approach to machine learning has been largely abandoned because traditional Markov chain Monte Carlo (MCMC) algorithms fail to sample from complex posterior distributions in a reasonable amount of time.

At the same time, in recent years diffusion models have proven to be remarkably effective generative algorithms for sampling from complex, high-dimensional target densities, as in Stable Diffusion and DALL-E. Additionally, many of the best MCMC algorithms we have today make use of auxiliary random variables, creating joint distributions between “target” variables we want to sample and “auxiliary” variables designed by the practitioner to facilitate sampling the target variable. Examples include Hamiltonian Monte Carlo with a “momentum” variable, simulated tempering with a “temperature” variable, and proximal sampling.

Combining these two ideas, a diffusion model can be viewed as a mathematical object defining a series of auxiliary random variables, coupled with the target variable, that can facilitate sampling. Under certain conditions, we can demonstrate a mixture representation of the target density in which the mixture components are well suited to traditional MCMC sampling. This structure has been named a “Log-Concave Coupling” in past research by this author. Under such a structure, a sample of the auxiliary random variable from its marginal density, followed by a sample of the target random variable from its conditional density, can be accomplished efficiently by traditional MCMC methods such as Langevin diffusion. This produces an equivalent sample of the target variable from its original target density, while only having to sample from log-concave densities in each step.
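The workhorse inside each log-concave step is a Langevin chain. A minimal sketch, not the author's algorithm: unadjusted Langevin dynamics targeting a log-concave density, here a standard Gaussian, whose score is grad log p(x) = -x. In the log-concave-coupling scheme sketched above, each marginal or conditional sampling step could be carried out by a chain of exactly this form.

```python
import numpy as np

# Unadjusted Langevin algorithm: Euler-Maruyama discretization of the
# diffusion dX = score(X) dt + sqrt(2) dW, whose stationary law is p
# when score(x) = grad log p(x). Runs n_chains chains in parallel.
def langevin_chains(score, n_chains=2000, step=0.01, n_steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(n_chains)
    for _ in range(n_steps):
        x = x + step * score(x) + np.sqrt(2 * step) * rng.standard_normal(n_chains)
    return x

samples = langevin_chains(lambda x: -x)  # target N(0, 1), score(x) = -x
# Empirical moments should be close to those of N(0, 1).
print(samples.mean(), samples.std())
```

Log-concavity of the target is what guarantees such chains mix rapidly; that is precisely the property the mixture representation above is engineered to provide at every step.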

Workshop
|
December 7, 2026, 9:00 am - December 11, 2026, 5:00 pm
Complexity and Linear Algebra Reunion

This reunion workshop is for long-term participants in the program "Complexity and Linear Algebra," held in the fall 2025 semester. It will provide an opportunity to meet old and new friends. Moreover, we hope that it will give everyone a chance to...

Workshop
|
September 12, 2025, 1:00 pm - September 12, 2025, 3:00 pm
Meet the Fellows Welcome Event Fall 2025

A welcome event for all new Simons fellows to introduce them to the Simons Institute community. All new fellows will present a 10-minute talk followed by 5 minutes of Q&A, with the aim of making introductions to each other, program participants, and the...

Workshop Talk
|
Sept. 12, 2025

Talk by

Workshop Talk
|
Sept. 12, 2025

Advancing Digital Design: Challenges and Solutions

The unprecedented performance levels and energy needs of AI and ML applications require new approaches both to technology choices and to the related design methods.
Diversity and heterogeneous integration will play a major role in the realization of future systems. Nevertheless, emerging technologies can be leveraged only if supported by corresponding design tools and flows, which may depart significantly from those used for CMOS.
While various technologies and computing paradigms have different goals and constraints, some design solutions may leverage the same fundamental properties of digital design. This brings logic synthesis and optimization back to the forefront of research as a main enabler of innovative and disruptive digital system design.

Workshop Talk
|
Sept. 11, 2025

Automated Design Space Exploration and Generation of AI Accelerators

Designing high-performance and energy-efficient AI accelerators requires significant engineering effort, and as the rapidly evolving field of machine learning develops new models, the current approach of designing ad hoc accelerators does not scale. In this talk, I will present our ongoing research on a high-level synthesis (HLS)-based framework for design space exploration and generation of hardware accelerators for AI. Given architectural parameters, such as datatype, scaling granularity, compute parallelism, and memory sizes, the framework generates a performant, fabrication-ready accelerator. Accelerators generated through this framework have been taped out in several chips, targeting various workloads including convolutional neural networks and transformer networks. I will describe the generator framework and show how we can also use it as a benchmarking tool for designs leveraging emerging technologies.
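The enumerative core of design space exploration can be sketched independently of any HLS backend. Everything below is hypothetical: the parameter values, the cost model, and the function names are made up for illustration and are not the framework described in the abstract, which is not public here.

```python
from itertools import product

# Toy cost model: wider datatypes and more lanes reduce latency but cost
# energy; on-chip memory adds a static energy term. All constants invented.
def toy_cost(bits, lanes, mem_kb):
    latency = 1e6 / (lanes * bits)
    energy = bits * lanes * 0.5 + mem_kb * 0.1
    return latency, energy

# Exhaustively sweep the (datatype width, parallelism, memory size) grid and
# keep the lowest-latency point that fits the energy budget.
def explore(bit_widths, lane_counts, mem_sizes, energy_budget):
    best = None
    for bits, lanes, mem_kb in product(bit_widths, lane_counts, mem_sizes):
        latency, energy = toy_cost(bits, lanes, mem_kb)
        if energy <= energy_budget and (best is None or latency < best[0]):
            best = (latency, energy, {"bits": bits, "lanes": lanes, "mem_kb": mem_kb})
    return best

best = explore([8, 16], [4, 8, 16], [64, 256], energy_budget=100.0)
print(best)
```

A real framework would replace `toy_cost` with synthesis or analytical models and would prune the grid rather than sweep it exhaustively, but the structure, parameters in, scored candidate designs out, is the same.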

Workshop Talk
|
Sept. 11, 2025

Talk by

Workshop Talk
|
Sept. 11, 2025

Real In-Memory Processing

The Simons Institute for the Theory of Computing is the world's leading venue for collaborative research in theoretical computer science.

© 2013–2026 Simons Institute for the Theory of Computing. All Rights Reserved.