Eddy is a first-year PhD student in the School of Computational Science and Engineering at Georgia Institute of Technology, advised by Helen Xu. He completed his undergraduate degree in the Computer Science Specialist program at the University of Toronto in...
Fulvio Gesmundo is a Junior Professor at the Institut de Mathématiques de Toulouse. His research interest is in Algebraic Geometry and Representation Theory, with a focus on problems originating in Theoretical Computer Science and...
Khashayar is a final-year PhD student in the EECS department at MIT, co-advised by Stefanie Jegelka and Jonathan A. Kelner. His research spans machine learning and optimization, with a focus on improving the efficiency and reliability of our training...
Jordan Docter is currently at Stanford University and their research interests are quantum algorithms and quantum cryptography.
Yuki Shirakawa is a graduate student at Kyoto University. His research interests are quantum cryptography and quantum advantage.
Welcome, Summer 2025 Visitors!
Check out our Visitor Guide for information to help you make the most of your stay.
Algorithmic recourse provides individuals who received an undesirable outcome from a machine learning model with suggestions of minimum-cost improvements they can make to achieve a desirable outcome in the future. However, machine learning models often get updated over time, which can cause a recourse to become invalid (i.e., no longer lead to the desirable outcome). The robust recourse literature aims to choose recourses that remain valid even under adversarial model changes, but this robustness comes at a higher cost. To overcome this obstacle, we initiate the study of algorithmic recourse through the learning-augmented framework and evaluate the extent to which a designer equipped with a prediction regarding future model changes can reduce the cost of recourse when the prediction is accurate (consistency) while also limiting the cost even when the prediction is inaccurate (robustness). We propose a novel algorithm for this problem, study the robustness-consistency trade-off, and analyze how prediction accuracy affects performance.
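To make the recourse setting concrete, here is a minimal sketch for a linear classifier: it computes the cheapest (L2) change that flips the outcome under the current model, plus a learning-augmented variant that checks that recourse against a predicted future model and pays extra cost only when the prediction says the cheap recourse would become invalid. The function names, the L2 cost, and the fallback rule are illustrative assumptions, not the algorithm from the abstract.

```python
# Illustrative sketch only: minimum-cost recourse for a linear classifier
# w.x + b >= 0, and a hedged "learning-augmented" variant that also consults
# a predicted future model.  All names and the fallback rule are assumptions.
import numpy as np

def min_cost_recourse(x, w, b, margin=1e-3):
    """Smallest L2 change to x so that w.x + b >= margin."""
    score = w @ x + b
    if score >= margin:
        return x.copy()                      # already on the desirable side
    step = (margin - score) / (w @ w)        # move along w to the boundary
    return x + step * w

def augmented_recourse(x, w_now, b_now, w_pred, b_pred, margin=1e-3):
    """Prefer the cheap recourse; if the predicted future model would
    invalidate it, pay extra to also satisfy that predicted model."""
    cheap = min_cost_recourse(x, w_now, b_now, margin)
    if w_pred @ cheap + b_pred >= margin:    # prediction says it stays valid
        return cheap
    return min_cost_recourse(cheap, w_pred, b_pred, margin)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=3)
    w_now, b_now = np.array([1.0, -0.5, 0.2]), -1.0
    w_pred, b_pred = np.array([0.9, -0.6, 0.3]), -1.2   # predicted future model
    r = augmented_recourse(x, w_now, b_now, w_pred, b_pred)
    print("recourse cost:", np.linalg.norm(r - x))
```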
Synthesizing "explainable" interpretations of black-box ML models is often crucial for reasoning about their behaviour. Decision trees and decision diagrams have typically been considered good for explanations, and parameters like depth, count of decision nodes, preference of decision predicates etc. have been used to quantify their "explainability" in different contexts. Almost inevitably, there is a tension between such explainability metrics and the accuracy of the interpretation. Simpler models score high on explainability, but may not faithfully mimic the behaviour of the model; similarly, more complex (and less explainable) models typically do a better job of mimicing the behaviour of a black box model. In this talk, we will discuss how Pareto-optimal interpretations of such black-box models can be systematically synthesized for a large class of explainability metrics using an optimized search based on MaxSAT solving. We provide PAC-style guarantees on the synthesized interpretations, and apply this technique to synthesize Pareto-optimal interpretations of some benchmarks. Our results show that there are often several Pareto-optimal interpretations that are easy to miss if one optimizes a single scalar objective function that combines accuracy and explainability metrics.
This is joint work with Hazem Torfah, Shetal Shah, Sanjit Seshia, and S. Akshay.
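As a toy illustration of the Pareto-optimality notion in the talk (not the MaxSAT-based synthesis), the sketch below fits surrogate decision trees of increasing depth to a hypothetical black-box model, scores each by fidelity, and keeps the non-dominated (depth, fidelity) pairs. The use of scikit-learn, the black-box function, and the depth-as-explainability proxy are assumptions made purely for the example.

```python
# Toy sketch of the Pareto front between an explainability proxy (tree depth)
# and fidelity to a black-box model.  This only illustrates Pareto dominance;
# the black box, the candidate set, and the metrics are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
black_box = lambda Z: ((Z[:, 0] * Z[:, 1] + np.sin(Z[:, 2])) > 0).astype(int)
y = black_box(X)

candidates = []
for depth in range(1, 8):                       # smaller depth = more explainable
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X, y)
    fidelity = (tree.predict(X) == y).mean()    # agreement with the black box
    candidates.append((depth, fidelity))

def dominates(a, b):
    # a dominates b: no worse in either objective, strictly better in one
    return (a[0] <= b[0] and a[1] >= b[1]) and (a[0] < b[0] or a[1] > b[1])

pareto = [c for c in candidates if not any(dominates(o, c) for o in candidates)]
print("Pareto-optimal (depth, fidelity) pairs:", pareto)
```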