Machine Learning: A Robustness Perspective

Machine learning has made tremendous progress over the last decade. However, this progress has largely been measured in terms of average-case, or even best-case, performance.

How well do modern machine learning methods perform from a worst-case perspective?

In this talk I will discuss the challenges behind making ML systems robust, reliable, and secure. I will start by surveying the widespread vulnerabilities of state-of-the-art ML models in adversarial settings. Then, I will outline both promising approaches for alleviating these deficiencies and our current understanding of the theoretical underpinnings of ML robustness.
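To make the adversarial setting concrete, here is an illustrative sketch (not taken from the talk or the papers below) of a fast-gradient-sign-style perturbation against a toy linear classifier; the function name, model, and numbers are all hypothetical, chosen only to show how a small input change can flip a prediction.

```python
# Hypothetical illustration: a fast-gradient-sign-style attack on a
# toy linear classifier sign(w.x + b) with labels y in {-1, +1}.
import numpy as np

def fgsm_perturb(x, w, y, eps):
    """Perturb x to increase the classifier's loss on label y.
    For a linear model, the loss gradient w.r.t. x points along -y*w,
    so the attack steps eps in the direction sign(-y * w)."""
    grad = -y * w                  # direction that increases the loss
    return x + eps * np.sign(grad)

w = np.array([1.0, -1.0])          # toy weight vector
x = np.array([0.3, -0.2])          # w.x = 0.5 > 0: predicted +1
y = 1                              # true label
x_adv = fgsm_perturb(x, w, y, eps=0.3)
print(np.sign(w @ x_adv))          # prediction after the attack: -1.0
```

Even though each coordinate of the input moves by only 0.3, the prediction flips from +1 to -1, which is the basic phenomenon that robust training methods aim to prevent.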


  1. Towards Deep Learning Models Resistant to Adversarial Attacks, A. Madry, A. Makelov, L. Schmidt, D. Tsipras, A. Vladu, ICLR 2018.

  2. Adversarially Robust Generalization Requires More Data, L. Schmidt, S. Santurkar, D. Tsipras, K. Talwar, A. Madry, NeurIPS 2018.

  3. Robustness May Be at Odds with Accuracy, D. Tsipras, S. Santurkar, L. Engstrom, A. Turner, A. Madry, ICLR 2019.

  4. Adversarial Examples Are Not Bugs, They Are Features, A. Ilyas, S. Santurkar, D. Tsipras, L. Engstrom, B. Tran, A. Madry, NeurIPS 2019.

Aleksander Madry

Aleksander Madry is an Associate Professor of Computer Science in the MIT EECS Department and a Principal Investigator in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). He received his PhD from MIT in 2011 and, prior to joining the MIT faculty, he spent time at Microsoft Research New England and on the faculty of EPFL.

Aleksander’s research interests span algorithms, continuous optimization, the science of deep learning, and understanding machine learning from a robustness perspective. His work has been recognized with a number of awards, including an NSF CAREER Award, an Alfred P. Sloan Research Fellowship, an ACM Doctoral Dissertation Award Honorable Mention, and the 2018 Presburger Award.