Early detection significantly improves outcomes across many cancers, motivating major investments in population-wide screening programs such as low-dose CT for lung cancer. To make screening more effective, we must improve early detection for patients who will develop cancer while minimizing the harms of overscreening. Advancing this Pareto frontier requires progress on three fronts: (1) accurately predicting patient outcomes from all available data, (2) designing intervention strategies tailored to risk, and (3) evaluating and translating these strategies into clinical practice. In this talk, I will present ongoing work across all three areas, driven by the goal of using every available bit of patient data to personalize cancer care.
Healthcare systems rely on many critical resources that are inherently scarce, including specialist appointments, advanced medical imaging, and complex genetic tests. Because these resources are intended to improve patient outcomes, they are typically allocated to the patients assessed as being at highest risk. However, when there are information gaps between populations, such as missing medical history, these allocation decisions may unintentionally exclude entire subgroups from receiving care.
In this talk, we review existing approaches to fairness, drawn both from the algorithmic fairness literature and from resource allocation. We then examine their limitations when applied to extremely resource-constrained healthcare settings, contexts they were not originally designed for. Finally, we discuss new approaches to fairness that focus on resource-limited environments.
Machine-learning models are now routinely used to guide clinical decisions, allocate scarce resources, and assess patient risk. These systems raise well-motivated concerns about fairness, especially when performance varies across demographic or clinically meaningful subgroups. “Fairness” and “accuracy” are often framed as competing objectives in which social goals must come at the expense of predictive performance. Yet contemporary research in algorithmic fairness shows that this tradeoff does not hold for every definition of fairness.
In this talk, we will explore how fairness notions such as multicalibration and related indistinguishability-based definitions can, in fact, improve the reliability and robustness of predictive models. Multicalibration bridges the gap between actuarial (group-level) and clinical (individual-level) risk analysis by requiring predictions to be statistically valid across rich families of potentially overlapping subpopulations. We will discuss how these guarantees lead to models that are robust to distribution shifts and adaptable to changing downstream objectives and constraints without retraining.
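To make the multicalibration requirement concrete, here is a minimal audit sketch (not from the talk itself): it flags cells, defined by a subgroup crossed with a prediction-level bin, where a risk model's average prediction drifts from the observed outcome rate. The subgroup definitions, tolerance `alpha`, and synthetic data are illustrative assumptions.

```python
import numpy as np

def multicalibration_violations(y, p, groups, n_bins=10, alpha=0.05, min_count=20):
    """Audit predictions p for multicalibration over (possibly overlapping) subgroups.

    For each subgroup and each prediction-level bin, compare the mean outcome
    to the mean prediction; a gap larger than alpha is flagged as a violation.
    groups: dict mapping subgroup name -> boolean mask over the samples.
    """
    bins = np.minimum((p * n_bins).astype(int), n_bins - 1)
    violations = []
    for name, mask in groups.items():
        for b in range(n_bins):
            cell = mask & (bins == b)
            if cell.sum() < min_count:
                continue  # too few samples in this cell to assess reliably
            gap = y[cell].mean() - p[cell].mean()
            if abs(gap) > alpha:
                violations.append((name, b, float(gap)))
    return violations

# Synthetic illustration: a model calibrated on average but not per subgroup.
rng = np.random.default_rng(0)
n = 5000
age_over_65 = rng.random(n) < 0.3
true_risk = np.where(age_over_65, 0.4, 0.2)
y = (rng.random(n) < true_risk).astype(float)
p = np.full(n, 0.26)  # matches the population base rate, not either subgroup
groups = {"all": np.ones(n, bool), "age>65": age_over_65, "age<=65": ~age_over_65}
print(multicalibration_violations(y, p, groups))
```

The constant predictor passes ordinary (marginal) calibration but fails within the age subgroups; a multicalibrated model must close those per-subgroup gaps as well.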
Finally, we will highlight specific scenarios where fairness requirements do introduce genuine tensions with predictive accuracy, and discuss why such tradeoffs may be unavoidable and raise policy questions that healthcare systems must grapple with.
Data science underpins modern AI and many advances in healthcare, yet human judgment permeates every stage of the data science life cycle. These judgment calls introduce hidden uncertainties that go well beyond sampling variability and drive many of the risks associated with AI.
We introduce veridical data science, grounded in three fundamental principles—Predictability, Computability, and Stability (PCS)—to make such uncertainties explicit and assessable and to aggregate reality-checked algorithms for better results. The PCS framework unifies and extends best practices in statistics and machine learning and is illustrated through healthcare applications, including identifying genetic drivers of heart disease, reducing the cost of prostate cancer detection, and improving uncertainty quantification beyond standard conformal prediction.
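For context on the last application, the standard split conformal baseline that the talk aims to move beyond can be sketched in a few lines; the toy data and point predictor below are illustrative assumptions, not material from the talk.

```python
import numpy as np

def split_conformal_interval(model_predict, X_cal, y_cal, X_test, alpha=0.1):
    """Split conformal prediction: intervals with ~(1 - alpha) marginal coverage.

    Assumes calibration and test data are exchangeable; model_predict is any
    fitted point predictor trained on a separate split.
    """
    residuals = np.abs(y_cal - model_predict(X_cal))
    n = len(residuals)
    # Finite-sample-corrected quantile of the calibration residuals.
    q = np.quantile(residuals, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    preds = model_predict(X_test)
    return preds - q, preds + q

# Toy check: linear data with noise, a fixed point predictor.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=2000)
y = 2 * X + rng.normal(0, 0.3, size=2000)
predict = lambda x: 2 * x  # stands in for a fitted model
lo, hi = split_conformal_interval(predict, X[:1000], y[:1000], X[1000:])
coverage = np.mean((y[1000:] >= lo) & (y[1000:] <= hi))
print(round(coverage, 3))
```

The guarantee here is only marginal and only under exchangeability, which is exactly where PCS-style stability checks (perturbing data splits and modeling choices) add value.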
Background: Clinical trials mature from Phase I to Phase IV. The delay in licensure of new drugs for children and pregnant women can be as long as 10 years after licensure for adult populations. Use of EMRs for data retrieval, along with AI to streamline both the development of clinical protocols and the preparation of data sets for submission to the FDA and EMA, is needed.
Methods: The timing of Pediatric Investigation Plans (PIPs) submitted to the EMA and Pediatric Study Plans (PSPs) submitted to the FDA, relative to clinical trial development, suggests long delays. IMPAACT network protocols and planning will be used to demonstrate gaps in clinical trial conduct and licensure submission.
Results: Data from the past 10 years of IMPAACT protocol development to licensure will be presented. Gaps and delays across TB and HIV treatment studies will be discussed, focusing on study oversight, interim analyses, trial master files, and regulatory submissions.
Conclusions: The lack of creative planning for changes in drug availability and new guidelines suggests that more timely and better alternatives, including AI and EMR tools for clinical trial development, are needed.