Imitation learning is a branch of machine learning concerned with learning to make (a sequence of) decisions given demonstrations and/or feedback. Canonical settings include self-driving cars and game playing. When scaling up to complex state/action spaces, one major challenge is how best to incorporate structure into the learning process. For instance, the complexity of unstructured imitation learning can scale very poorly with the raw size of the state/action space.
In this talk, I will describe recent and ongoing work on developing principled structured imitation learning approaches that exploit interdependencies in the state/action space, achieving orders-of-magnitude improvements in learning speed, accuracy, or both. These approaches are showcased on a wide range of (often commercially deployed) applications, including modeling professional sports, laboratory animals, speech animation, and expensive computational oracles.