Abstract The MonPoly project started over a decade ago to build effective tools for monitoring trace properties, including functional correctness, security, and compliance policies. The original goal was to support monitoring in expressive specification languages and to handle both the online case, where system events are monitored as they occur, and the offline case, where logs are monitored after the fact. The original MonPoly tool supported monitoring in metric first-order temporal logic (MFOTL), where events can be stored in finite databases or in automatic structures, represented by automata. Since then, our tool has evolved into a family of tools and supporting infrastructure to make monitoring both scalable and suitable for high-assurance applications. We survey this evolution, which includes: (1) developing both more and less expressive variants, e.g., adding aggregation operators, regular expressions, and limited forms of recursion, as well as considering more efficiently monitorable fragments and new monitoring algorithms for them; (2) designing support for parallel and distributed monitoring; (3) using theorem proving to verify monitoring algorithms and explore extensions; and (4) carrying out ambitious case studies to learn where the bottlenecks and limitations lie in practice.
Bio David Basin has been a full professor of Computer Science at ETH Zurich since 2003. His research areas are Information Security and Software Engineering. He is the founding director of the Zurich Information Security Center (ZISC), which he led from 2003 to 2011. He served as Editor-in-Chief of the ACM Transactions on Privacy and Security (2015-2020) and serves as Editor-in-Chief of Springer-Verlag's book series on Information Security and Cryptography (2008-present). He has co-founded three security companies and is on the board of directors of Anapaya Systems AG as well as on various management and scientific advisory boards. He is an IEEE Fellow and an ACM Fellow.
Abstract The quantification of privacy risks associated with algorithms is a core issue in data privacy. I will introduce a systematic approach to assessing the privacy risks of machine learning algorithms. I will highlight efforts towards establishing standardized privacy auditing procedures and privacy meter tools, based on membership inference algorithms, to identify vulnerable algorithms and to check compliance with privacy regulations. I will also explain how this methodology connects to the concept of differential privacy, and what it means for an algorithm to be robust against inference attacks.
Bio Reza Shokri is a Presidential Young Professor of Computer Science at the National University of Singapore. His research focuses on data privacy and fairness in machine learning. He is a recipient of the Asian Young Scientist Fellowship 2023, the Best Paper Award at ACM FAccT 2023, the IEEE Security and Privacy (S&P) Test-of-Time Award 2021, the VMware Early Career Faculty Award 2021, the NUS Early Career Research Award 2019, the Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies 2018, and Faculty Research Awards from Meta, Intel, and Google. He obtained his PhD from EPFL.
Abstract There is increasing interest in using machine learning technologies for safety- and mission-critical applications, e.g., deep neural networks for perception in self-driving road vehicles, but the relevant regulations that would provide the basis for assurance have not kept pace. Risk-based approaches to trust, in the form of argument-based safety cases, have shown promise for the assurance and subsequent operational approval of novel systems. We describe work on the dynamic assurance case (DAC) concept, a model-based, multifaceted approach to the assurance of safety-critical systems, and its application to several ML-based autonomous systems. Our vision is of a rich, expressive, and rigorously founded framework that goes well beyond how argument-based safety cases are currently developed.