

Event

David Stutz (MPI for Informatics, Saarbrücken): Relating Adversarial Robustness and Weight Robustness Through Flatness

Location: MPI für Mathematik in den Naturwissenschaften Leipzig, video broadcast

Video broadcast: Math Machine Learning seminar MPI MIS + UCLA

Abstract: Despite their outstanding performance, deep neural networks (DNNs) are susceptible to adversarial examples: imperceptibly perturbed inputs that cause misclassification. Similarly, though less studied, DNNs are fragile with respect to perturbations of their weights. This talk highlights my recent research on both input and weight robustness and investigates how the two problems are related. On the subject of adversarial examples, I discuss a confidence-calibrated version of adversarial training that obtains robustness beyond the adversarial perturbations seen during training. Next, regarding weight robustness, I address robustness against random bit errors in the (quantized) weights, which plays an important role in improving the energy efficiency of DNN accelerators. Surprisingly, improved weight robustness can also be beneficial for robustness against adversarial examples. Specifically, weight robustness can be thought of as flatness in the loss landscape with respect to perturbations of the weights. Using an intuitive flatness measure for adversarially trained DNNs, I demonstrate that flatness in the weight loss landscape improves adversarial robustness and helps to avoid robust overfitting.

Bio: David Stutz is a final-year PhD student at the Max Planck Institute for Informatics, supervised by Prof. Bernt Schiele and co-supervised by Prof. Matthias Hein from the University of Tübingen. He obtained his bachelor's and master's degrees in computer science from RWTH Aachen University. During his studies, he completed an exchange program with the Georgia Institute of Technology as well as several internships at Microsoft, Fyusion and Hyundai MOBIS, among others. He wrote his master's thesis at the Max Planck Institute for Intelligent Systems, supervised by Prof. Andreas Geiger. His PhD research focuses on obtaining robust deep neural networks, considering adversarial, corrupted and out-of-distribution examples. In a collaboration with IBM Research, subsequent work improves robustness against bit errors in (quantized) weights to enable energy-efficient and secure accelerators. This work received an outstanding paper award at the CVPR CV-AML Workshop 2021. More recently, during an internship at DeepMind, he used conformal prediction for uncertainty estimation in medical diagnosis. He has received several awards and scholarships, including the Qualcomm Innovation Fellowship, RWTH Aachen University's Springorum Denkmünze and the STEM Award IT sponsored by ZF Friedrichshafen. His work has been published at top venues in computer vision and machine learning, including ICCV, CVPR, IJCV, ICML and MLSys. More information can be found at www.davidstutz.de.
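The random bit errors mentioned in the abstract can be pictured with a minimal, hypothetical sketch; the actual quantization scheme and error model used in the talk are not specified in this announcement. Here, each bit of symmetrically quantized 8-bit weights is flipped independently with some probability p.

import numpy as np

def quantize_int8(w, w_max):
    """Symmetric 8-bit quantization of float weights (illustrative, not the talk's scheme)."""
    scale = w_max / 127.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def inject_random_bit_errors(q, p, rng):
    """Flip each bit of the int8 weights independently with probability p."""
    bits = np.unpackbits(q.view(np.uint8))        # 8 bits per weight
    flips = rng.random(bits.shape) < p            # Bernoulli(p) bit-error mask
    return np.packbits(bits ^ flips).view(np.int8).reshape(q.shape)

# Example: corrupt a random weight vector with a 1% bit-error rate.
rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)
q, scale = quantize_int8(w, np.abs(w).max())
w_err = inject_random_bit_errors(q, p=0.01, rng=rng).astype(np.float32) * scale

Likewise, the announcement does not spell out the flatness measure used in the talk. As a rough illustration only, one common way to formalize flatness of the (robust) loss landscape with respect to the weights is as the worst-case increase of the adversarial loss under small weight perturbations; the notation (f, \ell, \epsilon, \rho) below is assumed, not taken from the talk.

\[
  \tilde{\mathcal{L}}(w; x, y) = \max_{\|\xi\|_\infty \le \epsilon} \ell\big(f(x + \xi; w),\, y\big)
  \qquad \text{(adversarial loss at weights } w\text{)}
\]
\[
  \Phi(w) = \max_{\|\nu\| \le \rho} \mathbb{E}_{(x,y)}\big[\tilde{\mathcal{L}}(w + \nu; x, y)\big]
          - \mathbb{E}_{(x,y)}\big[\tilde{\mathcal{L}}(w; x, y)\big]
\]

Small \Phi(w) means the adversarial loss changes little when the weights are perturbed, i.e. the minimum is flat; the abstract argues that such flatness goes hand in hand with adversarial robustness and reduced robust overfitting.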

No Attachment

Start: Oct. 21, 2021, 5 p.m.

End: Oct. 21, 2021, 6:30 p.m.