Wednesday, August 10, 2022 - 2:00pm to 3:00pm
Location: Virtual Presentation - ET Remote Access - Zoom
Speaker: Chen Dan, Ph.D. Student, Computer Science Department, Carnegie Mellon University - https://chendancmu.github.io/
Sharp Statistical Guarantees for Adversarially Robust Gaussian Classification
Adversarial robustness has become a fundamental requirement in modern machine learning applications. Yet, its statistical properties have so far been surprisingly little understood. In this work, we provide the first optimal minimax guarantees on the excess risk of adversarially robust classification, under the Gaussian mixture model studied by Schmidt et al. (2018). The results are stated in terms of the Adversarial Signal-to-Noise Ratio (AdvSNR), which generalizes the analogous notion for standard linear classification to the adversarial setting. We establish an excess risk lower bound and design a computationally efficient estimator that achieves this optimal rate. Our results build upon a minimal set of assumptions while covering a wide spectrum of adversarial perturbations, including L_p balls for any p > 1. Joint work with Yuting Wei and Pradeep Ravikumar.
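As a minimal illustration of the setting (not taken from the talk), the sketch below simulates the Gaussian mixture model of Schmidt et al. (2018) - a label y drawn uniformly from {-1, +1} and features x ~ N(y*theta, sigma^2 I) - and evaluates the robust 0-1 error of a linear classifier sign(<w, x>) against an l_inf-bounded adversary. The choice of theta, sigma, and eps is illustrative only, and the closed-form expression follows from the fact that the worst-case l_inf perturbation shifts the margin y*<w, x> down by eps*||w||_1:

```python
# Illustrative sketch: robust 0-1 error of a linear classifier under
# the Gaussian mixture model of Schmidt et al. (2018), with an
# l_inf-bounded adversary of budget eps. All parameter values below
# are hypothetical examples, not figures from the talk.
import numpy as np
from scipy.stats import norm

def robust_error_closed_form(w, theta, sigma, eps):
    """P(exists ||d||_inf <= eps such that y * <w, x + d> <= 0).

    Since y*<w, x> ~ N(<w, theta>, sigma^2 ||w||_2^2) and the adversary
    can lower it by at most eps*||w||_1, the robust error equals
    Phi((eps*||w||_1 - <w, theta>) / (sigma * ||w||_2)).
    """
    w = np.asarray(w, dtype=float)
    theta = np.asarray(theta, dtype=float)
    return norm.cdf((eps * np.abs(w).sum() - w @ theta)
                    / (sigma * np.linalg.norm(w)))

def robust_error_monte_carlo(w, theta, sigma, eps, n=200_000, seed=0):
    """Empirical robust error: simulate the model and apply the optimal
    coordinate-wise sign attack against the linear classifier w."""
    rng = np.random.default_rng(seed)
    d = len(theta)
    y = rng.choice([-1.0, 1.0], size=n)
    x = y[:, None] * theta + sigma * rng.standard_normal((n, d))
    # Worst-case margin after the l_inf attack of budget eps.
    margin = y * (x @ w) - eps * np.abs(np.asarray(w)).sum()
    return float(np.mean(margin <= 0))

theta = np.array([1.0, 0.5, -0.5])   # hypothetical mean vector
w = theta / np.linalg.norm(theta)    # Bayes-optimal direction when eps = 0
sigma, eps = 1.0, 0.2
print(robust_error_closed_form(w, theta, sigma, eps))
print(robust_error_monte_carlo(w, theta, sigma, eps))
```

Note that at eps = 0 the expression reduces to Phi(-||theta||_2 / sigma), i.e. the standard (non-robust) error governed by the usual signal-to-noise ratio ||theta||_2 / sigma, which AdvSNR generalizes.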
Presented in Partial Fulfillment of the CSD Speaking Skills Requirement. Zoom Participation. See announcement.