Chen Dan
Statistical Learning Under Adversarial Distribution Shift

Degree Type: Ph.D. in Computer Science
Advisor(s): Pradeep Ravikumar
Graduated: August 2022

Abstract:
One of the most fundamental assumptions in statistical machine learning is that training and testing data are sampled independently from the same distribution. However, modern real-world applications require learning algorithms to perform robustly even when this assumption no longer holds. Specifically, the training and testing distributions may shift slightly (yet adversarially) within a small neighborhood of each other. This formulation encompasses many new challenges in machine learning, including adversarial examples, outlier-contaminated data, group fairness, and label imbalance. In this thesis, we seek to understand statistical optimality and provide better algorithms under such adversarial distribution shift. Our contributions include: (1) the first near-optimal minimax lower bound on the sample complexity of adversarially robust classification in a Gaussian setting; (2) the framework of distributional and outlier robust optimization, which makes distributionally robust optimization practical for large-scale experiments with deep neural networks and outperforms existing methods on sub-population shift tasks; (3) margin-sensitive group risk, a principled way of improving distributionally robust generalization via group-asymmetric margin maximization.

Thesis Committee:
Pradeep Ravikumar (Chair)
Zico Kolter
Zachary Lipton
Avrim Blum (Toyota Technological Institute at Chicago)
Yuting Wei (University of Pennsylvania)

Srinivasan Seshan, Head, Computer Science Department
Martial Hebert, Dean, School of Computer Science

Keywords: Machine Learning, Statistical Learning Theory, Robustness

CMU-CS-22-127.pdf (2.57 MB, 184 pages)
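As a rough illustration of contribution (2), a common distributionally robust objective is the CVaR risk (average of the worst-case fraction of per-sample losses); combining it with outlier robustness amounts to first discarding a small fraction of the highest losses as presumed outliers, then applying the DRO risk to what remains. The sketch below is a minimal, hedged rendering of that idea; the function name, the parameter names `eps` and `alpha`, and the exact estimator are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def outlier_robust_cvar(losses, eps=0.1, alpha=0.5):
    """Hedged sketch of a distributional-and-outlier-robust risk:
    drop the eps fraction of samples with the highest loss (treated as
    outliers), then average the worst alpha fraction of the remaining
    losses (a CVaR-style worst-case sub-population risk)."""
    losses = np.sort(np.asarray(losses, dtype=float))[::-1]  # descending
    n = len(losses)
    n_drop = int(eps * n)              # presumed outliers, highest losses
    kept = losses[n_drop:]             # remaining losses, still descending
    k = max(1, int(alpha * len(kept))) # size of worst-case sub-population
    return kept[:k].mean()             # average loss over that group
```

Minimizing such a risk over model parameters (with the per-sample losses recomputed each batch) is what lets the worst-performing sub-population, rather than the average sample, drive training.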