Speaker: Xiyue Zhang
Time: 14:30, Jan. 29
Location: SIST 1C-502
Host: Prof. Yuqi Chen
Abstract:
As deep learning (DL) systems become integral to safety-critical domains, from autonomous driving to healthcare, their trustworthiness is more important than ever. Despite remarkable progress, deep neural networks can exhibit instability and remain vulnerable to adversarial perturbations. Addressing these issues requires new techniques that provide robustness guarantees and practical scalability. In this talk, I will introduce key ideas in certification of neural networks and present recent results on robustness verification, testing methodologies for complex models, and adversarial robust learning. I will close by outlining several open challenges and research opportunities for certifying DL models and building trustworthy modern foundation models.
Bio:
Xiyue Zhang is an Assistant Professor in the School of Computer Science at the University of Bristol. Before joining Bristol, she was a Research Associate in the Department of Computer Science at the University of Oxford. She received her PhD in Applied Mathematics in 2022 and her BSc in Information and Computing Science in 2017 from Peking University. Her work focuses on trustworthy deep learning by integrating provable certification and empirical evaluation methods. Her recent work includes abstraction and verification for deep neural networks, as well as practical testing and safety analysis techniques for large-scale models.

