Friday, May 6, 2022 - 4:00pm to 5:30pm
Location: In Person and Virtual - ET McWilliams Classroom, Gates Hillman 4303
Speaker: CHUN KAI LING, Ph.D. Student, Computer Science Department, Carnegie Mellon University - https://lingchunkai.github.io/
Scalable Learning and Solving of Extensive-Form Games
Strategic interactions between agents can be formalized by game theory. Modern learning techniques, coupled with the ubiquity of data and the proliferation of compute, have paved the way towards computationally solving large games, with broad applications ranging from poker to security. In this thesis proposal, we address two challenging scenarios in game theory: (i) when game parameters are unspecified, and (ii) when trying to solve large, general-sum extensive-form games. First, we study the inverse problem in game theory, where game parameters are to be learnt given only samples drawn from its quantal response equilibrium. An end-to-end, differentiable game-solving module fully compatible with modern deep learning architectures is presented. This approach allows for the learning of game parameters in two-player normal-form and extensive-form zero-sum games in a scalable manner via gradient descent. Second, we extend state-of-the-art online Nash solvers currently used in zero-sum games to Stackelberg and correlated equilibria in general-sum extensive-form games. In both cases, we present efficient online search algorithms which avoid computation in branches of the game tree not encountered in actual play. These algorithms allow approximate solving of large general-sum games which offline solvers struggle with, while still providing guarantees on performance.

Thesis Committee:
J. Zico Kolter (Co-chair)
Fei Fang (Co-chair)
Tuomas Sandholm
Vincent Conitzer
Michael Bowling (University of Alberta / DeepMind)

Additional Information
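To make the quantal response equilibrium (QRE) mentioned in the abstract concrete, here is a minimal sketch of a logit QRE computed by fixed-point iteration for a two-player zero-sum normal-form game. This is an illustrative toy (the payoff matrix, rationality parameter `lam`, and damping scheme are our own choices for the example), not the differentiable game-solving module proposed in the thesis.

```python
import math

def softmax(values, lam):
    """Logit (softmax) response with rationality parameter lam."""
    m = max(values)
    exps = [math.exp(lam * (v - m)) for v in values]  # shift for stability
    total = sum(exps)
    return [e / total for e in exps]

def logit_qre(A, lam=1.0, iters=500, damp=0.5):
    """Damped fixed-point iteration for the logit QRE of a zero-sum
    normal-form game. A holds the row player's payoffs; the column
    player receives -A. Returns (row strategy, column strategy)."""
    m, n = len(A), len(A[0])
    x = [1.0 / m] * m   # row player's mixed strategy
    y = [1.0 / n] * n   # column player's mixed strategy
    for _ in range(iters):
        # Expected payoff of each row action against y.
        u_row = [sum(A[i][j] * y[j] for j in range(n)) for i in range(m)]
        # Expected payoff of each column action against x (zero-sum).
        u_col = [-sum(A[i][j] * x[i] for i in range(m)) for j in range(n)]
        x_new = softmax(u_row, lam)
        y_new = softmax(u_col, lam)
        # Damping keeps the iteration stable.
        x = [damp * a + (1 - damp) * b for a, b in zip(x, x_new)]
        y = [damp * a + (1 - damp) * b for a, b in zip(y, y_new)]
    return x, y

# Matching pennies: by symmetry the logit QRE is the uniform strategy.
A = [[1, -1], [-1, 1]]
x, y = logit_qre(A)
```

At each quantal response, players mix over actions in proportion to the exponentiated expected payoff rather than best-responding exactly; the inverse problem in the proposal is to recover parameters such as `A` from samples of such equilibrium play.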
In Person and Zoom Participation. See announcement.