Responsibly Improving AI with Privacy-Sensitive...
Brendan McMahan (Google)
Richard M. Karp Distinguished Lecture
Tuesday, February 24
3:30 – 4:30 p.m. PT
Calvin Lab auditorium
A Sloan Research Fellowship is one of the most prestigious awards available to early-career researchers.
This workshop brings together participants from the Special Year on Large Language Models and Transformers program, parts 1 and 2, held in the 2024-25 academic year, as well as the Deep Learning Theory and the Interpretable Machine Learning summer clusters...
DNA methylation provides a rich epigenetic signal that reflects both genetic and environmental influences and can potentially be leveraged in multiple ways in medicine. In this talk, I will discuss two complementary directions: using methylation risk scores for disease prediction and for imputing missing phenotypes from electronic health records, and using methylation data for association analysis, where signals of interest are often obscured by tissue heterogeneity. I will describe how dimensionality reduction and deconvolution techniques enable the identification of cell-type–specific disease signals from bulk methylation measurements, and conclude by highlighting open computational questions at the intersection of prediction, interpretability, and heterogeneous biological data.
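The deconvolution step mentioned above can be illustrated with a small sketch. This is not the speaker's method, and the reference matrix here is simulated: it shows the generic reference-based approach of estimating cell-type proportions from a bulk methylation profile by non-negative least squares, then renormalizing so the proportions sum to one.

```python
# Hedged sketch: reference-based deconvolution of a bulk methylation
# sample via non-negative least squares (simulated data throughout).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical reference matrix: methylation levels at 200 CpG sites
# (rows) for 3 cell types (columns).
reference = rng.uniform(0.0, 1.0, size=(200, 3))

# Simulate a bulk sample as a known mixture of the cell types plus noise.
true_props = np.array([0.6, 0.3, 0.1])
bulk = reference @ true_props + rng.normal(0.0, 0.01, size=200)

# Solve min ||reference @ w - bulk|| subject to w >= 0, then normalize
# the weights into proportions.
w, _ = nnls(reference, bulk)
est_props = w / w.sum()
print(np.round(est_props, 2))
```

Once per-sample proportions are in hand, cell-type-specific association signals can be sought by modeling the bulk measurement as a proportion-weighted mixture, which is the general setting the abstract refers to.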
In recent years, foundation models have grown in prominence within ML/AI and promise significant benefits in health and biomedical applications, especially on non-traditional modalities. In this talk, I will present a formalization of foundation models that provides insight into their relative strengths and weaknesses, and how we can build meaningful new foundation models across health and biomedical applications. In so doing, we will examine emerging foundation models for electronic health record data and identify new algorithms that can offer significant benefits in various health settings.
Two central criteria of group fairness -- error-rate parity and calibration within subgroups -- usually cannot be satisfied at the same time. We introduce a natural way to relax these notions, which reveals the tradeoffs between them and suggests a range of new, optimally fair prediction rules. This relaxation also highlights when some fairness requirements must fail. Finally, we turn to the surprising question of when and how it is even possible to make fair decisions based on fair scores.
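The tension between the two criteria named above can be made concrete with a small example of my own construction (not taken from the talk): two groups whose scores are equally well calibrated at a threshold, yet whose error rates necessarily differ because their base rates differ.

```python
# Hedged sketch: per-group error rates and calibration for a
# thresholded risk score, with constructed data showing that equal
# calibration and unequal base rates force unequal error rates.
import numpy as np

def group_metrics(scores, labels, threshold=0.5):
    """Return (false-positive rate, false-negative rate, precision)
    for one group; precision here serves as a calibration check on
    the predicted-positive bucket."""
    pred = scores >= threshold
    fpr = np.mean(pred[labels == 0])          # negatives flagged positive
    fnr = np.mean(~pred[labels == 1])         # positives missed
    ppv = labels[pred].mean()                 # outcome rate among flagged
    return fpr, fnr, ppv

# Group A: base rate 0.5; scores of 0.8 and 0.2 match outcome rates.
scores_a = np.array([0.8] * 10 + [0.2] * 10)
labels_a = np.array([1] * 8 + [0] * 2 + [1] * 2 + [0] * 8)

# Group B: base rate 0.35; the same score buckets are equally calibrated.
scores_b = np.array([0.8] * 5 + [0.2] * 15)
labels_b = np.array([1] * 4 + [0] * 1 + [1] * 3 + [0] * 12)

fpr_a, fnr_a, ppv_a = group_metrics(scores_a, labels_a)
fpr_b, fnr_b, ppv_b = group_metrics(scores_b, labels_b)
print(f"A: FPR={fpr_a:.2f} FNR={fnr_a:.2f} PPV={ppv_a:.2f}")
print(f"B: FPR={fpr_b:.2f} FNR={fnr_b:.2f} PPV={ppv_b:.2f}")
```

Both groups have precision 0.8 among those flagged (matching the 0.8 score bucket), yet their false-positive and false-negative rates diverge; relaxations of the two criteria, as the abstract describes, trade off along exactly this axis.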
Realizing the promise of precision medicine requires resolving meaningful molecular and clinical heterogeneity in ways that generalize across studies and diverse populations. This talk will present new robust computational methods that advance this goal, illustrated through discoveries such as regulatory-like naïve T-cell populations associated with aging, a type 2 asthma sub-endotype linked to differential therapeutic response, and glaucoma patient subgroups with distinct outcomes under first-line treatments.
Early detection significantly improves outcomes across many cancers, motivating major investments in population-wide screening programs, such as low-dose CT for lung cancer. To make screening more effective, we must simultaneously improve early detection for patients who will develop cancer while minimizing the harms of overscreening. Advancing this Pareto frontier requires progress across three fronts: (1) accurately predicting patient outcomes from all available data, (2) designing intervention strategies tailored to risk, and (3) evaluating and translating these strategies into clinical practice. In this talk, I will present ongoing work across all three areas, driven by the goal of using every available bit of patient data to personalize cancer care.