In Person and Virtual - Blelloch-Skees Conference Room, Gates Hillman 8115 and Zoom
ABHRADEEP GUHA THAKURTA, Staff Research Scientist, Google Research - Brain Team
Federated Learning with Formal User-Level Differential Privacy Guarantees
In this talk I will discuss the algorithmic research that led to the deployment of the first production ML model using federated learning with a rigorous differential privacy guarantee. Along the way, I will highlight the systemic challenges that drove the algorithmic design.
The talk will primarily focus on the DP-FTRL algorithm (a differentially private variant of follow-the-regularized-leader), which was developed during this research effort. I will provide both theoretical and empirical insights into the efficacy of DP-FTRL. In particular, I will show that DP-FTRL compares favorably to DP-SGD (differentially private stochastic gradient descent) while not relying on privacy amplification by sampling, a crucial component for achieving strong privacy/utility trade-offs when operating with minibatch gradients. In comparison to DP-SGD, this makes DP-FTRL amenable to more flexible data access patterns, which is crucial in our federated learning deployment.
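The abstract does not spell out the mechanism, but the published DP-FTRL work obtains its guarantee via tree aggregation: fresh noise is assigned to each dyadic interval of training steps, and the released gradient prefix sum at step t adds only the O(log t) node noises whose intervals tile [0, t), so no sampling assumption on data order is needed. The sketch below is a minimal illustration of that prefix-sum mechanism only; the function name is mine, and per-example clipping, momentum, and the actual learning-rate/update rule are omitted.

```python
import math
import numpy as np

def dp_prefix_sums(grads, sigma, rng):
    """Noisy gradient prefix sums via the binary-tree mechanism (sketch).

    Each dyadic interval of steps gets Gaussian noise, sampled once and
    reused; the released prefix sum at step t adds only the noises of
    the O(log t) intervals that tile [0, t).
    """
    n = len(grads)
    dim = grads[0].shape[0]
    max_level = max(1, math.ceil(math.log2(n)))
    noise = {}  # (level, index) -> noise vector, sampled lazily once

    def node_noise(level, index):
        key = (level, index)
        if key not in noise:
            noise[key] = rng.normal(0.0, sigma, size=dim)
        return noise[key]

    released = []
    true_sum = np.zeros(dim)
    for t, g in enumerate(grads, start=1):
        true_sum = true_sum + g
        total_noise = np.zeros(dim)
        start, remaining = 0, t
        # Greedy dyadic decomposition of [0, t), largest intervals first.
        for level in range(max_level, -1, -1):
            size = 1 << level
            if remaining >= size:
                total_noise += node_noise(level, start // size)
                start += size
                remaining -= size
        released.append(true_sum + total_noise)
    return released
```

Because each step's gradient touches only O(log n) tree nodes, the noise in any released prefix sum grows polylogarithmically in t rather than linearly, which is the source of DP-FTRL's favorable privacy/utility trade-off without subsampling.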
— Abhradeep Guha Thakurta is a staff research scientist at Google Research - Brain Team. His primary research interest is the intersection of data privacy and machine learning. He focuses on demonstrating, both theoretically and in practice, that it is possible to design differentially private learning algorithms that scale to industrial workloads. Prior to Google, Abhradeep was a faculty member at UC Santa Cruz, and before that he worked at Apple and Yahoo Labs as a research scientist. He received his Ph.D. from The Pennsylvania State University in 2013 and held a joint postdoctoral appointment at Stanford University and Microsoft Research Silicon Valley.
In Person and Zoom Participation. See announcement.