
Caltech Young Investigators Lecture

Monday, May 3, 2021
4:00pm to 5:00pm
Online Event
Provable Representation Learning: The Importance of Task Diversity and Pretext Tasks
Qi Lei, Princeton University

Abstract:

Modern machine learning models are transforming applications in various domains at the expense of a large amount of hand-labeled data. In contrast, humans and animals first establish their concepts or impressions from data observations. The learned concepts then help them learn specific tasks with minimal external instruction. Accordingly, we argue that deep representation learning follows a similar procedure: 1) learn a data representation that filters out irrelevant information; 2) transfer that representation to downstream tasks with few labeled samples and simple models. In this talk, we study two forms of representation learning: supervised pre-training from multiple tasks and self-supervised learning.

Supervised pre-training uses a large labeled source dataset to learn a representation, then trains a simple (linear) classifier on top of the representation. We prove that supervised pre-training can pool the data from all source tasks to learn a good representation that transfers to downstream tasks (possibly with covariate shift) with few labeled examples. We extensively study different settings in which the representation reduces the model capacity in various ways. Self-supervised learning creates auxiliary pretext tasks that do not require labeled data to learn representations. These pretext tasks are created solely from input features, such as predicting a missing image patch, recovering the color channels of an image, or predicting missing words. Surprisingly, predicting this known information helps in learning a representation useful for downstream tasks. We prove that, under an approximate conditional independence assumption, self-supervised learning provably learns representations that linearly separate downstream targets. For both frameworks, representation learning provably and drastically reduces the sample complexity of downstream tasks.
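To make the first framework concrete, below is a minimal numpy sketch of the pretrain-then-linear-probe pipeline the abstract describes: pool many labeled source tasks to estimate a shared low-dimensional representation, then fit a linear head on the frozen features of a new task with only a few labels. The per-task least-squares plus SVD step is a simple stand-in for the joint pre-training analyzed in the talk, not the speaker's actual method, and all dimensions and variable names are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    d, k, T, n_src, n_tgt = 50, 5, 20, 100, 10

    # Ground-truth shared representation: a k-dimensional subspace of R^d.
    B_true = np.linalg.qr(rng.standard_normal((d, k)))[0]

    # Source tasks: each task has its own linear head on top of the shared features.
    X_src = [rng.standard_normal((n_src, d)) for _ in range(T)]
    y_src = [X @ B_true @ rng.standard_normal(k) + 0.1 * rng.standard_normal(n_src)
             for X in X_src]

    # Step 1 (pre-training): pool the source tasks; estimate the shared subspace
    # from the span of the per-task least-squares solutions.
    W_hat = np.column_stack([np.linalg.lstsq(X, y, rcond=None)[0]
                             for X, y in zip(X_src, y_src)])
    B_hat = np.linalg.svd(W_hat, full_matrices=False)[0][:, :k]  # top-k directions

    # Step 2 (downstream): a new task with only n_tgt labeled samples; fit a
    # linear probe on the frozen k-dimensional features instead of all d inputs.
    w_new = rng.standard_normal(k)
    X_tgt = rng.standard_normal((n_tgt, d))
    y_tgt = X_tgt @ B_true @ w_new + 0.1 * rng.standard_normal(n_tgt)

    Z_tgt = X_tgt @ B_hat                                # learned k-dim features
    head = np.linalg.lstsq(Z_tgt, y_tgt, rcond=None)[0]  # linear probe, k params

    # Baseline: fit all d parameters directly from the few target samples.
    naive = np.linalg.lstsq(X_tgt, y_tgt, rcond=None)[0]
    X_test = rng.standard_normal((1000, d))
    y_test = X_test @ B_true @ w_new
    print("probe MSE:", np.mean((X_test @ B_hat @ head - y_test) ** 2))
    print("naive MSE:", np.mean((X_test @ naive - y_test) ** 2))

With the shared subspace fixed, the downstream task fits only k parameters instead of d, which is the kind of sample-complexity reduction the abstract refers to.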

Bio:

Qi Lei is a Computing Innovation Fellow in the ECE department at Princeton University, working with Jason Lee. She received her Ph.D. from the Oden Institute for Computational Engineering & Sciences at UT Austin in May 2020, advised by Alex Dimakis and Inderjit Dhillon. She was also a member of the Center for Big Data Analytics and the Wireless Networking & Communications Group. She visited the Institute for Advanced Study (IAS) in Princeton for the Theoretical Machine Learning Program for one year, and before that was a research fellow at the Simons Institute for the Foundations of Deep Learning program. Her main research interests are machine learning, deep learning, and optimization. Qi has received several awards, including a four-year National Initiative for Modeling and Simulation Graduate Research Fellowship, a two-year Computing Innovation Fellowship, and a Simons-Berkeley Research Fellowship. She also holds several patents.

This talk is part of the Caltech Young Investigators Lecture Series, sponsored by the Division of Engineering and Applied Science.

For more information, please contact Diana Bohler by email at [email protected].