Modern machine-learning and AI systems are tremendously useful, but they bring with them an array of new privacy, security, and trust concerns. Complicating the situation even further, many learning systems today operate in decentralized settings, in which data and computation are distributed across many mutually distrusting parties.
This workshop will focus on privacy, security, and trust issues in decentralized machine-learning systems. The goal is to bring together experts whose backgrounds span the breadth of topics relating to decentralized learning and security. Topics will include secure computation (e.g., training models on distributed private data), the theory and practice of differential privacy, security and privacy attacks on AI systems, emerging concerns (e.g., agentic AI security), and deployments of distributed-learning and private-aggregation systems. The speakers will include a mix of academic researchers and industry practitioners, covering both theoretical and applied topics.
If you require special accommodation, please contact our access coordinator at simonsevents@berkeley.edu with as much advance notice as possible.
Please note: the Simons Institute regularly captures photos and video of activity around the Institute for use in videos, publications, and promotional materials.