Caltech

H.B. Keller Colloquium

Monday, November 18, 2019
4:00pm to 5:00pm
Annenberg 105
Thompson Sampling for learning in online decision making
Shipra Agrawal, Assistant Professor, Department of Industrial Engineering and Operations Research, and Data Science Institute, Columbia University


Modern online marketplaces feed themselves. They rely on historical data to optimize content and user interactions, and the data generated from those interactions is then fed back into the system and used to optimize future interactions. As this cycle continues, good performance requires algorithms capable of learning actively through sequential interactions, systematically experimenting to improve future performance, and balancing this experimentation with the desire to make decisions with the most immediate benefit. Thompson Sampling is a surprisingly simple and flexible Bayesian heuristic for handling this exploration-exploitation tradeoff in online decision problems. While this basic algorithmic technique can be traced back to 1933, the last five years have seen unprecedented growth in both theoretical understanding of and commercial interest in this method. In this talk, I will discuss our work on the design and analysis of Thompson Sampling-based algorithms for several classes of multi-armed bandits, online assortment selection, and reinforcement learning problems. Our work provides some of the first theoretical bounds on Thompson Sampling, and demonstrates that natural versions of this heuristic achieve near-optimal regret bounds for many of these problems.
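To make the exploration-exploitation idea concrete, below is a minimal sketch of Thompson Sampling for the classical Bernoulli multi-armed bandit, the simplest setting the abstract mentions. It is not the speaker's code; the function name, the Beta(1, 1) prior, and the simulated reward loop are illustrative assumptions used only to show the sample-then-act structure of the heuristic.

```python
import numpy as np

def thompson_sampling_bernoulli(true_means, horizon, rng=None):
    """Illustrative Thompson Sampling for a Bernoulli multi-armed bandit.

    Each arm keeps a Beta(successes + 1, failures + 1) posterior over its
    unknown mean reward. Each round: sample one value per posterior, play
    the arm with the largest sample, observe a 0/1 reward, update that arm.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_arms = len(true_means)
    successes = np.zeros(n_arms)
    failures = np.zeros(n_arms)
    best_mean = max(true_means)
    cumulative_regret = 0.0

    for _ in range(horizon):
        # Draw one sample from each arm's Beta posterior (prior = Beta(1, 1)).
        theta = rng.beta(successes + 1, failures + 1)
        arm = int(np.argmax(theta))                 # act on the sampled beliefs
        reward = rng.binomial(1, true_means[arm])   # simulated bandit feedback
        successes[arm] += reward
        failures[arm] += 1 - reward
        cumulative_regret += best_mean - true_means[arm]

    return cumulative_regret

# Example run: regret grows slowly with the horizon as the posteriors
# concentrate on the best arm (hypothetical arm means shown here).
print(thompson_sampling_bernoulli([0.3, 0.5, 0.7], horizon=10_000))
```

The key design choice is that exploration comes "for free" from posterior sampling: arms with uncertain estimates occasionally produce large samples and get tried, while arms with confidently low estimates are quickly abandoned, which is the tradeoff the regret bounds discussed in the talk quantify.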

This talk is based on several joint research works with Vashist Avadhanula, Navin Goyal, Vineet Goyal, Randy Jia, and Assaf Zeevi.