Monday, January 24, 2022 - 12:00pm to 1:30pm
Location: Virtual Presentation - ET Remote Access - Zoom
Speaker: KLAS LEINO, Ph.D. Student, Computer Science Department, Carnegie Mellon University (http://www.cs.cmu.edu/~kleino/)
Training Provably Robust Neural Networks
Deep networks have been shown extensively to be vulnerable to maliciously perturbed inputs, termed "adversarial examples," through which an attacker can cause the network to make arbitrary mistakes. This raises concerns for neural networks deployed in the wild, especially in safety-critical settings, e.g., in autonomous vehicles. In turn, this has motivated a large body of work on practical defenses; among these, this talk will focus primarily on rigorous defenses that provide provable guarantees of a property termed "local robustness," which precludes an important class of adversarial examples. Specifically, we will cover an elegant and effective defense that modifies the architecture of a neural network to naturally provide provable guarantees of local robustness without adding inference-time overhead. Finally, we will discuss some of the limitations of local robustness, demonstrating that in some contexts it is too stringent (we propose some natural relaxations), while in other ways it is insufficient to capture certain realistic forms of attack.
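To make the notion of local robustness concrete: one standard way to certify it (a minimal NumPy sketch, not necessarily the method covered in the talk; the toy network and weights are hypothetical) uses an upper bound on the network's Lipschitz constant. If each logit can change by at most K·eps under an L2-perturbation of norm at most eps, then a margin between the top two logits exceeding 2·K·eps guarantees the predicted class cannot change.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer ReLU network with random (illustrative) weights.
W1 = rng.normal(size=(8, 4)) * 0.3
W2 = rng.normal(size=(3, 8)) * 0.3

def logits(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

# Upper bound on the network's L2 Lipschitz constant:
# product of the layers' spectral norms (ReLU is 1-Lipschitz).
K = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)

def certified_robust(x, eps):
    """Certify local robustness at radius eps: if the margin between
    the top two logits exceeds 2*K*eps, no L2-perturbation of norm
    <= eps can change the predicted class, since each logit moves by
    at most K*eps."""
    y = np.sort(logits(x))
    margin = y[-1] - y[-2]
    return margin > 2.0 * K * eps

x = rng.normal(size=4)
print(certified_robust(x, 0.01))
```

This bound is sound but loose; architectures designed for certification tighten it so that the margin check itself serves as the robustness guarantee, with no extra cost at inference time.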
Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.
Zoom Participation. See announcement.