Presented By
O’Reilly + Intel AI
Put AI to Work
April 15-18, 2019
New York, NY

Manipulating and Measuring Model Interpretability

Forough Poursabzi-Sangdeh (Microsoft Research NYC)
1:50pm–2:30pm Thursday, April 18, 2019
Interacting with AI
Location: Regent Parlor
Secondary topics: Ethics, Privacy, and Security; Interfaces and UX

Who is this presentation for?

Machine learning scientists, machine learning practitioners, user experience designers for machine learning



Prerequisite knowledge

- Basic knowledge of supervised machine learning
- Basic knowledge of human-subject experiments

What you'll learn

In light of recent advances in machine learning, it is likely that more and more decisions will be made with machine learning models as decision aids. As we transition to this future, it is also likely there will be a demand for models that people can interpret. I hope that the presented experiments and results serve as a reminder that these models should be empirically evaluated, and that the behavior of end users, not the intuitions of modelers, should guide the creation of interpretable models.


Machine learning is increasingly used to make decisions that affect people’s lives in critical domains like criminal justice, fair lending, and medicine. While most of the research in machine learning focuses on improving the performance of models on held-out datasets, this is seldom enough to convince end-users that these models are trustworthy and reliable in the wild. To address this problem, a new line of research has emerged that focuses on developing interpretable machine learning methods and helping end-users make informed decisions. Despite the growing body of work in developing interpretable models, there is still no consensus on the definition and quantification of interpretability.

In this talk, I will argue that to understand interpretability, we need to bring humans into the loop and run human-subject experiments. I approach the problem of interpretability from an interdisciplinary perspective that builds on decades of research in psychology, cognitive science, and social science to understand human behavior and trust. I will talk about a set of controlled user experiments in which we manipulated various design factors commonly thought to make models more or less interpretable and measured their influence on users' behavior. Our findings emphasize the importance of studying how models are presented to people and of empirically verifying that interpretable models achieve their intended effects on end users.

Forough Poursabzi-Sangdeh

Microsoft Research NYC

Forough is a post-doctoral researcher at Microsoft Research New York City. She works in the interdisciplinary area of interpretable and interactive machine learning. Forough collaborates with psychologists to study human behavior when interacting with machine learning models. She uses these insights to design machine learning models that humans can use effectively. She is also interested in several aspects of fairness, accountability, and transparency in machine learning and their effect on users’ decision-making process. Forough holds a BE in computer engineering from the University of Tehran and a PhD in computer science from the University of Colorado at Boulder.
