Machine Learning / Duolingo Seminar

Time:
— 11:30am

Location:
In Person and Virtual - ET - Newell-Simon 4305 and Zoom

Speaker:
MAX SIMCHOWITZ, Postdoctoral Researcher, Robot Locomotion Group, Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology
https://msimchowitz.github.io/

A Tale of Two Shifts

Distribution shift between training and test-time deployment is an unavoidable challenge in modern machine learning. In this talk, we will investigate two very different distribution shift phenomena. First, we propose a model for “heterogeneous distribution shift,” in which different features shift by different amounts at test time. We show that pure empirical risk minimization (ERM) is more resilient to shifts in “simple features,” and posit ERM’s ability to adapt to heterogeneous shifts as a possible mechanism for why simple supervised learning is such a strong baseline in distribution shift benchmarks.
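
To make the setup concrete, here is a minimal, purely illustrative sketch (not taken from the talk) of heterogeneous distribution shift: two feature groups are shifted by different amounts at test time, and a predictor fit by plain ERM is evaluated under each shift. The data-generating process, the split into “simple” and “complex” features, and all constants are assumptions chosen only for illustration; the sketch shows the setup, not the talk’s findings.

```python
# Illustrative sketch (assumptions throughout): heterogeneous distribution
# shift, where the "simple" and "complex" feature groups shift by different
# amounts at test time, and a model fit by plain ERM is evaluated under each.
import numpy as np

rng = np.random.default_rng(0)
n, d_simple, d_complex = 5000, 2, 20

def sample(n, shift_simple=0.0, shift_complex=0.0):
    x_simple = rng.normal(loc=shift_simple, scale=1.0, size=(n, d_simple))
    x_complex = rng.normal(loc=shift_complex, scale=1.0, size=(n, d_complex))
    w_simple = np.ones(d_simple)          # strong, linear signal (easy to fit)
    w_complex = 0.1 * np.ones(d_complex)  # weak, nonlinear signal (harder to fit)
    y = x_simple @ w_simple + np.tanh(x_complex) @ w_complex \
        + 0.1 * rng.normal(size=n)
    return np.hstack([x_simple, x_complex]), y

# Pure empirical risk minimization (here, ordinary least squares) on training data.
X_tr, y_tr = sample(n)
w_hat, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

def test_mse(shift_simple, shift_complex):
    X_te, y_te = sample(n, shift_simple, shift_complex)
    return float(np.mean((X_te @ w_hat - y_te) ** 2))

# Shift each feature group by a different amount and compare test error.
print("shift simple features only :", test_mse(1.0, 0.0))
print("shift complex features only:", test_mse(0.0, 1.0))
```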

Next, we turn to distribution shifts induced by deploying learned models in feedback with a dynamic environment. Here, the states encountered by the learner at test time diverge from those seen at training time because of errors that accumulate during execution. We propose a new paradigm for combating these compounding errors that leverages control-theoretic primitives, and we apply our technique to robotic imitation learning with probabilistic generative models. While heterogeneous shift and compounding error may seem like unrelated challenges, we show that they in fact co-exist when deploying trained language models to autoregressively generate text. We conclude with some preliminary prescriptions to combat their twin effects.
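
As a companion, here is a small, purely illustrative sketch (again, not taken from the talk) of compounding error: a policy that imitates an expert well on the expert's state distribution is rolled out in closed loop, noise and small per-step errors push it onto states it never saw during training, and the rollout drifts away from the training distribution. The dynamics, policies, and constants are all assumptions for illustration.

```python
# Illustrative sketch (assumptions throughout): compounding error when an
# imitation-learned policy is rolled out in closed loop with the dynamics.
import numpy as np

rng = np.random.default_rng(0)
T, noise_std = 200, 0.4

def expert(x):
    return -0.8 * x               # stabilizing expert keeps the state near zero

def learner(x):
    # Accurate on the expert's state distribution (roughly |x| <= 1), but
    # unreliable on states it never saw during training.
    if abs(x) <= 1.0:
        return expert(x) + 0.05   # small in-distribution imitation error
    return 0.0                    # no training data here: arbitrary behavior

def rollout(policy):
    x, states = 0.0, []
    for _ in range(T):
        x = x + policy(x) + noise_std * rng.normal()
        states.append(x)
    return np.array(states)

x_expert, x_learner = rollout(expert), rollout(learner)
print("expert  max |state|:", np.max(np.abs(x_expert)))
print("learner max |state|:", np.max(np.abs(x_learner)))
```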

Max Simchowitz is a postdoctoral researcher in the Robot Locomotion Group at MIT CSAIL. He studies the theoretical foundations of machine learning problems with a sequential or dynamical component; he currently focuses on robotics and out-of-distribution learning, with past work ranging broadly across control, reinforcement learning, optimization, and algorithmic fairness. He received his PhD from the University of California, Berkeley, advised by Ben Recht and Michael I. Jordan, and his work has been recognized with an ICML 2018 Best Paper Award, an ICML 2022 Outstanding Paper Award, and an RSS 2023 Best Paper Finalist designation.

