Machine Learning: A Robustness Perspective
Aleksander Mądry
(MIT); tutorials: Maciej Satkiewicz and Piotr Wygocki
Machine learning has made tremendous progress over the last decade. It's thus tempting to believe that ML techniques are a "silver bullet", capable of making progress on any real-world problem they are applied to.
But is that really so?
In this series of lectures (and exercises/labs), we will discuss a major challenge in the real-world deployment of ML: making ML solutions robust, reliable and secure. In particular, we will focus on the phenomenon of widespread vulnerability of state-of-the-art ML models to various forms of adversarial noise. We will then examine the roots of this vulnerability as well as potential approaches to mitigating it.
Finally, we will study how features extracted by neural networks may diverge from those used by humans, and how robust models might help in bridging this gap.
The topics we plan to cover:
Lecture 1: Intro, adversarial examples, data poisoning and backdoor attacks, robust training.
Lecture 2: Theoretical perspective on standard vs robust machine learning, the roots of adversarial non-robustness.
Lecture 3: Robustness beyond robustness: human-vision alignment and biases in the datasets.
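To make the notion of adversarial noise concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one standard way of crafting the adversarial examples covered in Lecture 1. The logistic-regression "model" and its weights are illustrative placeholders, not part of the course materials; the same idea applies to deep networks, where the input-gradient is obtained by backpropagation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Perturb input x by eps * sign of the loss gradient w.r.t. x.

    For a logistic model p = sigmoid(w . x + b) with cross-entropy loss,
    the gradient of the loss with respect to the input is (p - y) * w.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy model and input (values chosen purely for illustration).
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.3, -0.2, 0.1])
y = 1.0  # true label

p_clean = sigmoid(w @ x + b)              # confident in class 1
x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
p_adv = sigmoid(w @ x_adv + b)            # pushed toward class 0
```

Even though each coordinate of the input moves by at most `eps`, the prediction flips, because the perturbation is aligned with the loss gradient. Robust (adversarial) training counters this by training on such worst-case perturbed inputs rather than on the clean ones.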
About the lecturer: Aleksander Madry is a Professor of Computer Science at MIT and leads the MIT Center for Deployable Machine Learning. His research interests span algorithms, continuous optimization, the science of deep learning, and understanding machine learning from a robustness and deployability perspective. Aleksander's work has been recognized with a number of awards, including an NSF CAREER Award, an Alfred P. Sloan Research Fellowship, an ACM Doctoral Dissertation Award Honorable Mention, and the Presburger Award. He received his PhD from MIT in 2011 and, prior to joining the MIT faculty, he spent time at Microsoft Research New England and on the faculty of EPFL.
Location and schedule:
Zoom. The lectures will be recorded and posted below after each lecture.