
CMX Lunch Seminar

Wednesday, February 21, 2024
12:00pm to 1:00pm
Annenberg 213
Transitions between harmful, benign and no overfitting in neural networks
Michael Murray, Hedrick Assistant Adjunct Professor, Department of Mathematics, University of California, Los Angeles

We will discuss benign overfitting in two-layer ReLU networks trained using gradient descent and hinge loss on noisy data for binary classification. In particular, we consider linearly separable data in which a relatively small proportion of the labels are corrupted, or flipped. We identify conditions on the margin of the clean data that give rise to three distinct training outcomes: benign overfitting, in which zero training loss is achieved and, with high probability, test data is classified correctly; harmful overfitting, in which zero training loss is achieved but test data is misclassified with probability bounded below by a constant; and no overfitting, in which the clean points, but not the corrupt points, achieve zero loss and, again with high probability, test data is classified correctly. Our analysis provides a fine-grained description of the dynamics of the neurons throughout training and reveals two distinct phases: in the first phase, the clean points achieve close to zero loss; in the second phase, the clean points oscillate on the boundary of zero loss while the corrupt points either converge towards zero loss or are eventually zeroed by the network. We prove these results using a combinatorial approach that bounds the number of updates driven by clean versus corrupt points across these phases of training.
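For readers unfamiliar with the setting, the following is a minimal numpy sketch of the kind of training problem the abstract describes: a two-layer ReLU network with fixed second-layer weights (a common convention in this literature, not necessarily the speaker's construction), trained by subgradient descent on the hinge loss over linearly separable data with a small fraction of flipped labels. All parameters here (dimension, width, noise rate, margin, learning rate) are illustrative assumptions rather than values from the talk.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: linearly separable data with a fraction of flipped labels.
d, n, noise_rate, margin = 20, 200, 0.1, 1.0
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)                  # ground-truth separating direction
X = rng.normal(size=(n, d))
X += margin * np.sign(X @ w_star)[:, None] * w_star  # push points to enforce a margin
y_clean = np.sign(X @ w_star)
flip = rng.random(n) < noise_rate                 # corrupt a small fraction of labels
y = np.where(flip, -y_clean, y_clean)

# Two-layer ReLU network: f(x) = sum_j a_j * relu(w_j . x),
# with the second layer a fixed and only the first layer W trained.
m = 50                                            # hidden width (assumption)
W = rng.normal(scale=0.1, size=(m, d))
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

def f(X):
    return np.maximum(X @ W.T, 0.0) @ a

lr, steps = 0.05, 2000
for t in range(steps):
    margins = y * f(X)
    active = margins < 1.0                        # points with nonzero hinge loss
    relu_mask = (X @ W.T > 0).astype(float)       # which neurons fire on each point
    # Subgradient of the mean hinge loss with respect to W.
    G = -(a[None, :] * relu_mask * (y * active)[:, None]).T @ X / n
    W -= lr * G
    if t % 500 == 0:
        print(t, np.mean(np.maximum(0.0, 1.0 - margins)))

# Evaluate on fresh clean test points to see whether the fit generalizes.
X_test = rng.normal(size=(1000, d))
X_test += margin * np.sign(X_test @ w_star)[:, None] * w_star
print("test error:", np.mean(np.sign(f(X_test)) != np.sign(X_test @ w_star)))

Depending on the margin and noise rate, runs of this kind can end up in any of the three regimes the abstract describes; the talk characterizes the conditions under which each occurs.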

For more information, please contact Jolene Brink by phone at (626) 395-2813 or by email at [email protected], or visit the CMX website.