Theory Lunch Seminar - Natalie Collina
Time: 1:00pm
Location: In Person - Gates Hillman 8102
Speaker: Natalie Collina, Ph.D. Student in Computer Science, Department of Computer and Information Science, University of Pennsylvania
https://www.seas.upenn.edu/~ncollina/
As AI systems become more capable, a central challenge is designing them to work effectively with humans. I will first consider collaborative prediction, motivated by a doctor consulting an AI that shares the goal of accurate diagnosis. Even when the doctor and AI have only partial and incomparable knowledge, repeated interaction enables richer forms of collaboration: we give distribution-free guarantees that their combined predictions are strictly better than either alone, with regret bounds against benchmarks defined on their joint information.

I will then revisit the alignment assumption itself. If an AI is developed by, say, a pharmaceutical company with its own incentives, how can we encourage helpful behavior? A natural scenario is that the doctor has access to multiple models, each from a different provider. Under a mild ‘market alignment’ assumption—that the doctor’s utility lies in the convex hull of the providers’ utilities—we show that in Nash equilibrium of this competition, the doctor can achieve the same outcomes as if a perfectly aligned provider were present.
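To make the ‘market alignment’ condition concrete, here is a minimal sketch (not from the talk; all utility numbers and names are illustrative) of checking whether a doctor's utility vector lies in the convex hull of two providers' utility vectors. With two providers, membership reduces to finding a single mixing weight lam in [0, 1] with u_doctor = lam * u1 + (1 - lam) * u2.

```python
def is_market_aligned(u_doctor, u1, u2, tol=1e-9):
    """Check whether u_doctor is a convex combination of u1 and u2."""
    # Solve for the mixing weight lam using the first coordinate
    # where the two providers' utilities differ.
    for d, a, b in zip(u_doctor, u1, u2):
        if abs(a - b) > tol:
            lam = (d - b) / (a - b)
            break
    else:
        # u1 == u2: aligned iff u_doctor matches them exactly.
        return all(abs(d - a) <= tol for d, a in zip(u_doctor, u1))
    if not (-tol <= lam <= 1 + tol):
        return False  # the weight is not a valid convex coefficient
    # Verify the same weight reproduces every coordinate.
    return all(abs(d - (lam * a + (1 - lam) * b)) <= tol
               for d, a, b in zip(u_doctor, u1, u2))

# Hypothetical utilities over two outcomes (diagnostic accuracy, drug sales):
pharma = (0.2, 1.0)   # provider that favors sales
insurer = (1.0, 0.0)  # provider that favors accuracy
doctor = (0.6, 0.5)   # midpoint of the two (lam = 0.5)

print(is_market_aligned(doctor, pharma, insurer))        # True
print(is_market_aligned((2.0, 2.0), pharma, insurer))    # False
```

With more than two providers the same question becomes a small linear feasibility problem, but the two-provider case already captures the spirit of the assumption: the doctor need not share any single provider's incentives, only some mixture of them.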
Based on joint work: Tractable Agreement Protocols (STOC’25), Collaborative Prediction (SODA’26), and Emergent Alignment via Competition (in submission).
For More Information: hfleisch@andrew.cmu.edu