www.caltech.edu Events and Seminars
CMX Lunch Seminar: Transitions between harmful, benign and no overfitting in neural networks
https://www.caltech.edu/campus-life-events/calendar/cmx-lunch-seminar-36
Michael Murray, Hedrick Assistant Adjunct Professor, Department of Mathematics, University of California Los Angeles
Wed, 21 Feb 2024 12:00:00 -0800
We will discuss benign overfitting in two-layer ReLU networks trained using gradient descent and hinge loss on noisy data for binary classification. In particular, we consider linearly separable data in which a relatively small proportion of the labels are corrupted, or flipped. We identify conditions on the margin of the clean data that give rise to three distinct training outcomes: benign overfitting, in which zero loss is achieved and, with high probability, test data is classified correctly; harmful overfitting, in which zero loss is achieved but test data is misclassified with probability bounded below by a constant; and no overfitting, in which the clean points, but not the corrupt points, achieve zero loss and, again with high probability, test data is classified correctly.

Our analysis provides a fine-grained description of the dynamics of the neurons throughout training and reveals two distinct phases: in the first phase the clean points achieve close to zero loss; in the second phase the clean points oscillate on the boundary of zero loss, while the corrupt points either converge towards zero loss or are eventually zeroed out by the network. We prove these results using a combinatorial approach that involves bounding the number of updates driven by clean versus corrupt points across these two phases of training.
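To make the setting concrete, here is a minimal NumPy sketch of the training regime the abstract describes: linearly separable data with a small fraction of flipped labels, and a two-layer ReLU network trained by (sub)gradient descent on the hinge loss. All specifics (dimensions, noise rate, learning rate, and the common theoretical simplification of fixing the outer-layer weights to random signs) are illustrative assumptions, not details taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: n samples, input dimension d, hidden width m (assumptions).
n, d, m = 200, 20, 50
noise_rate = 0.05  # small fraction of flipped labels, as in the abstract

# Linearly separable clean labels from a ground-truth direction w_star.
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)
X = rng.normal(size=(n, d))
y_clean = np.sign(X @ w_star)
flip = rng.random(n) < noise_rate
y = np.where(flip, -y_clean, y_clean)  # corrupted training labels

# Two-layer ReLU network f(x) = sum_j a_j * relu(w_j . x); the outer weights
# a_j are fixed to random signs (a common simplification in theory papers --
# whether the talk uses exactly this parameterisation is an assumption).
W = rng.normal(scale=1.0 / np.sqrt(d), size=(m, d))
a = rng.choice([-1.0, 1.0], size=m) / m

def forward(X):
    return np.maximum(X @ W.T, 0.0) @ a

def hinge_loss(X, y):
    return np.mean(np.maximum(0.0, 1.0 - y * forward(X)))

lr, steps = 0.05, 500
loss_start = hinge_loss(X, y)
for _ in range(steps):
    margins = y * forward(X)
    active = (margins < 1.0).astype(float)  # points still inside the hinge
    pre = X @ W.T                           # (n, m) preactivations
    relu_grad = (pre > 0).astype(float)
    # Subgradient of the mean hinge loss with respect to the inner weights W.
    coef = (-(y * active))[:, None] * relu_grad * a[None, :]
    W -= lr * (coef.T @ X) / n
loss_end = hinge_loss(X, y)

# Accuracy measured against the clean (uncorrupted) labels.
train_acc_clean = np.mean(np.sign(forward(X)) == y_clean)
```

Tracking which of the three outcomes occurs would amount to checking, at convergence, whether the corrupt points also reach zero loss (overfitting) or are instead zeroed out by the network (no overfitting), which is the distinction the abstract's phase analysis formalizes.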