
Designing user interfaces for AI for unbiased decision making

Rachel Bellamy (IBM Research), Casey Dugan (IBM Research)
14:35–15:15 Wednesday, 10 October 2018
Interacting with AI
Location: King's Suite - Balmoral
Level: Beginner
Secondary topics: Ethics, Privacy, and Security; Interfaces and UX

Who is this presentation for?

  • User interface designers and AI user interface developers

Prerequisite knowledge

  • Familiarity with AI and machine learning

What you'll learn

  • Understand how to think about designing user interfaces for AI decision support applications

Description

Data bias is not only an AI problem; it’s also a UI problem. Non-AI experts use custom application interfaces to help them make decisions based on predictions from machine learning models. These application interfaces need to be designed so that the decisions made are unbiased.

Rachel Bellamy and Casey Dugan explain how the design of the user interface to AI model-based decision-support tools can help reduce bias in the decision making of the non-AI experts who use them. They also share a design study on how to represent predictions about a criminal defendant’s likelihood of reoffending so that the people viewing them can recognize whether they are fair.

Throughout the US, judges and parole officers often use algorithms to assess a defendant’s likelihood of reoffending. These algorithms take in data about the defendant, such as their race, age, and prior arrests. Various models are learned from this data, and the result is a binary decision of “will” or “will not reoffend” for each defendant. This decision has real implications for the defendant’s life, since the assessment can affect their sentence or parole length.
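
To make this setup concrete, here is a minimal sketch of such a prediction pipeline, not the actual COMPAS model; the file name, column names, and feature choices are illustrative assumptions.

    # Hypothetical sketch of a recidivism classifier: the data file and columns
    # are illustrative, not the real COMPAS model or data schema.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("defendants.csv")             # assumed file, one row per defendant
    features = ["age", "priors_count", "race"]     # example input attributes
    X = pd.get_dummies(df[features])               # one-hot encode categorical attributes
    y = df["reoffended_within_two_years"]          # observed binary outcome used as label

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # The model's output is reduced to a binary "will" / "will not" reoffend decision.
    decisions = model.predict(X_test)              # 1 = predicted to reoffend, 0 = not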

Every state except Alaska uses some form of risk assessment in its criminal justice system, whether a commercial tool such as COMPAS (ref) or something developed in-house. The method and purpose sometimes vary within a state, and not all judges in a state rely on these assessments in their decisions. As awareness of this risk assessment software grows, more and more research is being done to evaluate its ethics and accuracy, as well as to develop techniques for discrimination discovery and prevention.

In most cases, however, these models are not the ultimate arbiter; they are embedded in decision-support applications used by non-machine-learning experts such as judges or parole officers. It is the responsibility of the designers of these applications to ensure that the end users acting on model predictions can tell whether those predictions are biased.

What people will think depends on what the results are and how they are communicated. In this preliminary work, Rachel and Casey built on work by their AI research colleagues Calmon, Wei, Vinzamuri, Ramamurthy, and Varshney (2017), who developed a novel probabilistic formulation of data preprocessing for reducing discrimination. That work proposes a convex optimization for learning a data transformation with three goals: controlling discrimination, limiting distortion in individual data samples, and preserving utility. The preprocessing was applied to the COMPAS risk assessment dataset to improve the fairness of the model’s binary decisions across groups of defendants. Two models, a decision tree and a logistic regression model, were then trained on both the preprocessed and the original datasets. Rachel and Casey used predictions from these two models in their user interface design study.
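
The sketch below illustrates the comparison described: both model types trained on the original and on a preprocessed dataset, with a simple per-group positive-prediction-rate check. The preprocess function, file name, and column names are hypothetical placeholders; the Calmon et al. (2017) method itself is not reimplemented here.

    # Train a decision tree and a logistic regression on original vs. "de-biased"
    # data, then compare how often each group is predicted to reoffend.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier

    def preprocess(df):
        # Placeholder standing in for the Calmon et al. (2017) optimized
        # preprocessing (discrimination control, limited per-sample distortion,
        # preserved utility). Not reimplemented here.
        return df

    def positive_rate_by_group(model, X, group):
        """Fraction of each group predicted to reoffend: a simple disparity check."""
        preds = model.predict(X)
        return {g: preds[(group == g).values].mean() for g in group.unique()}

    df = pd.read_csv("defendants.csv")                    # assumed toy dataset
    for label, data in {"original": df, "preprocessed": preprocess(df)}.items():
        X = pd.get_dummies(data[["age", "priors_count", "race"]])
        y = data["reoffended_within_two_years"]
        for name, model in {"decision tree": DecisionTreeClassifier(max_depth=4),
                            "logistic regression": LogisticRegression(max_iter=1000)}.items():
            model.fit(X, y)
            print(label, name, positive_rate_by_group(model, X, data["race"]))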

For the study, they created explanations in the form of visual representations (decision trees and relative factor-weighting graphs) of each model type, along with example predictions from the preprocessed and original datasets. Feedback from more than 100 users suggests that both representations helped people judge the preprocessed dataset to be fairer, but only the decision tree led them to trust their decision.
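
As a rough illustration of those two explanation styles, the self-contained sketch below shows how a decision-tree view and a relative factor-weighting view could be pulled from such models; the tiny inline dataset and column names are purely illustrative.

    # Generate the two kinds of explanations: a decision-tree view and a
    # relative factor-weighting view. Toy data for illustration only.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier, export_text

    X = pd.DataFrame({"age": [22, 35, 48, 29, 61, 19],
                      "priors_count": [3, 0, 1, 5, 0, 2]})
    y = [1, 0, 0, 1, 0, 1]                                # toy "reoffended" labels

    # 1. Decision-tree explanation: a textual rendering of the learned rules.
    tree_model = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(export_text(tree_model, feature_names=list(X.columns)))

    # 2. Factor-weighting explanation: coefficient magnitudes scaled to sum to 1,
    #    read as each factor's relative influence on the prediction.
    lr_model = LogisticRegression().fit(X, y)
    weights = np.abs(lr_model.coef_[0])
    for name, share in zip(X.columns, weights / weights.sum()):
        print(f"{name}: {share:.2f}")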

Rachel Bellamy

IBM Research

Rachel Bellamy is a principal research scientist and manages the Human-AI Collaboration Group at the IBM T.J. Watson Research Center in Yorktown Heights, New York, where she leads an interdisciplinary team of human-computer interaction experts, user experience designers, and user experience engineers. Previously, she worked in the Advanced Technology Group at Apple, where she conducted research on collaborative learning and led an interdisciplinary team that worked with the San Francisco Exploratorium and schools to pioneer the design, implementation, and use of media-rich collaborative learning experiences for K–12 students. She holds many patents and has published more than 70 research papers. Rachel holds a PhD in cognitive psychology from the University of Cambridge and a BS in psychology with mathematics and computer science from the University of London.

Casey Dugan

IBM Research

Casey Dugan is the manager of AI experiences at IBM Research in Cambridge. She interned with the IBM Cambridge team twice before joining full time. In her time at IBM, she’s worked on projects like Malibu, Beehive (Social Blue), Blog Muse, TimeSquare/Timeflash, and Social Pulse. Her latest projects are the #selfiestation and the Meeting Room of the Future. She graduated from MIT.
