Caltech

Fooling neural networks: a few new takes on adversarial examples

Friday, September 28, 2018
11:00am to 12:00pm
Annenberg 213
CMS Special Seminar
Tom Goldstein, Assistant Professor, University of Maryland

This talk investigates ways that optimization can be used to exploit neural networks and create security risks. I begin by reviewing the concept of "adversarial examples," in which small perturbations to test images can completely alter the behavior of the neural networks that act on those images. I then introduce a new type of "poisoning attack," in which neural networks are attacked at training time instead of test time. Finally, I ask a fundamental question about neural network security: are adversarial examples inevitable? Approaching this question from a theoretical perspective, I provide a rigorous analysis of the susceptibility of neural networks to attack.
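
For readers unfamiliar with the test-time attacks mentioned above, the sketch below illustrates the basic idea using the fast gradient sign method (FGSM), a standard gradient-based attack from this literature. The model, loss, and perturbation budget epsilon are illustrative assumptions and are not necessarily the methods presented in the talk.

# Minimal FGSM sketch (illustrative only, not the speaker's method).
# Assumes a PyTorch classifier `model` and a labeled image batch (x, y).
import torch

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return a copy of the input batch x perturbed to increase the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the gradient, bounded by epsilon, and keep pixels valid.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()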

For more information, please contact Sabrina Pirzada by phone at 626-395-2813 or by email at [email protected].