Rigorous Systems Research Group Seminar
Non-convex learning has received significant attention in recent years due to its wide range of applications. Despite a large body of work on non-convex learning, two fundamental questions remain open: one computational, one concerning security.
On the computational side, a long-standing question is how to design efficient algorithms that reach the global optimum of a non-convex problem in polynomial time. In this talk, we analyze the loss landscapes of two classic non-convex problems --- matrix factorization and deep learning. We show that both problems enjoy a small duality gap, a natural measure of how non-convex a problem is. This analysis bridges non-convex learning with its convex counterpart and explains why matrix factorization and deep learning are intrinsically not hard to optimize.
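For readers unfamiliar with the term, the duality gap of a constrained problem is the difference between its primal optimal value and the optimal value of its Lagrangian dual; weak duality guarantees the gap is non-negative, and a small gap means the non-convex problem is well approximated by a convex one. A standard formulation (for a generic objective f and inequality constraints g, not the specific problems of the talk):

```latex
p^{*} = \min_{x}\; f(x) \quad \text{s.t. } g(x) \le 0,
\qquad
d^{*} = \max_{\lambda \ge 0}\; \min_{x}\; f(x) + \lambda^{\top} g(x),
\qquad
\text{gap} = p^{*} - d^{*} \ge 0 .
```

For convex problems satisfying mild regularity conditions the gap is zero; the talk's claim is that matrix factorization and deep learning come close to this convex behavior.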
On the security side, non-convex models, deep neural networks in particular, are vulnerable to adversarial examples. In this talk, we identify a trade-off between robustness and accuracy that serves as a guiding principle in the design of defenses against adversarial examples. We quantify the trade-off as the gap between the risk on adversarial examples and the risk on natural examples, and give an optimal upper bound on this quantity in terms of a classification-calibrated loss. Inspired by this theoretical analysis, we design a new defense method, TRADES, that trades adversarial robustness off against accuracy. The proposed algorithm is the foundation of our entry to the NeurIPS 2018 Adversarial Vision Challenge, in which we won 1st place out of 1,995 submissions, surpassing the runner-up approach by 11.41%.
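To make the two-term trade-off concrete, here is a minimal sketch of a TRADES-style objective (a natural-accuracy term plus a robustness regularizer over a perturbation ball). It is illustrative only: the actual method operates on deep networks with an iterative inner maximization, whereas this sketch uses a linear scorer f(x) = w·x, for which the worst-case l-infinity perturbation of the robust term has a closed form. The names `trades_objective`, `eps`, and `lam` are our own, not from the talk.

```python
import numpy as np

def logistic_loss(margin):
    # A classification-calibrated surrogate: phi(m) = log(1 + exp(-m)).
    return np.log1p(np.exp(-margin))

def trades_objective(w, X, y, eps, lam):
    """TRADES-style objective for a linear scorer f(x) = w @ x.

    Natural term:  phi(y * f(x))                    -- drives accuracy.
    Robust term:   max_{||d||_inf <= eps} phi(f(x) * f(x + d) / lam)
                   -- penalizes predictions that flip inside the eps-ball.
    For linear f, f(x + d) = f(x) + w @ d, so the inner maximum is attained
    at d_i = -eps * sign(f(x) * w_i), giving the closed form below.
    """
    f = X @ w                                   # scores f(x) for each row
    natural = logistic_loss(y * f)              # accuracy term
    # Worst-case value of f(x) * f(x + d) over the l-inf ball of radius eps:
    worst_margin = f * f - eps * np.abs(f) * np.abs(w).sum()
    robust = logistic_loss(worst_margin / lam)  # robustness term
    return np.mean(natural + robust)
```

Setting `eps = 0` recovers the natural risk plus a consistency term, and growing `eps` can only increase the objective, which is the robustness/accuracy tension the talk quantifies.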
Joint work with Nina Balcan, Laurent El Ghaoui, Jiantao Jiao, Michael I. Jordan, Yingyu Liang, Ruslan Salakhutdinov, David P. Woodruff, Eric P. Xing, and Yaodong Yu.