Simina Branzei (Purdue University)
Note: The event time listed is in Pacific Time.
It is typically expected that if a mechanism is truthful, then the agents will indeed report their private information truthfully. But why would an agent believe that the mechanism is truthful? We wish to design truthful mechanisms whose truthfulness can be verified efficiently (in the computational sense). Our approach involves three steps: (i) specifying the structure of mechanisms, (ii) constructing a verification algorithm, and (iii) measuring the quality of verifiably truthful mechanisms. We demonstrate this approach through a case study: approximate mechanism design without money for facility location. Joint work with Ariel D. Procaccia.
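As a concrete illustration of truthfulness without money in the facility-location setting (a standard textbook example, not the verification framework presented in the talk), the median mechanism places a single facility on a line at the median of the reported locations; no agent can move the facility closer to its true position by misreporting:

```python
# Minimal sketch of the classic median mechanism for single-facility
# location on a line: placing the facility at the (lower) median of the
# reported locations makes truthful reporting a dominant strategy.
# Names and example numbers here are illustrative, not from the talk.

def median_mechanism(reports: list[float]) -> float:
    """Place the facility at the lower median of the reported locations."""
    ordered = sorted(reports)
    return ordered[(len(ordered) - 1) // 2]

# Truthful profile vs. a unilateral misreport: agent 0 (true location 0.0)
# cannot bring the facility closer to itself by lying.
truths = [0.0, 5.0, 9.0]
honest = median_mechanism(truths)             # facility at the median, 5.0
deviant = median_mechanism([8.0, 5.0, 9.0])   # agent 0 misreports 8.0
assert abs(honest - truths[0]) <= abs(deviant - truths[0])
```

Each agent's cost is its distance to the facility; moving one's report across the median only pushes the facility further away, which is why truthfulness is a dominant strategy here.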
<p>To register for this event and receive the Zoom link, please email organizers <a href="mailto:firstname.lastname@example.org?subject=Inquiry%20to%20register%20for%20Interpretable%20Machine%20Learning%20event%20June%2029%2C%202020">Shai Ben-David</a> or <a href="mailto:email@example.com?subject=Inquiry%20to%20register%20for%20Interpretable%20Machine%20Learning%20event%20June%2029%2C%202020">Ruth Urner</a>.</p>