Data bias is not only an AI problem; it’s also a UI problem. Non-AI experts use custom application interfaces to help them make decisions based on predictions from machine learning models. These application interfaces need to be designed so that the decisions made are unbiased.
Rachel Bellamy and Casey Dugan explain how the design of the user interface to AI model-based decision-support tools can help reduce bias in the decision making of the non-AI experts who use them. They share a design study on how to represent predictions about a criminal defendant’s likelihood of reoffending so that people viewing them can recognize whether those predictions are fair.
Throughout the US, judges and parole officers often use algorithms to assess a defendant’s likelihood of reoffending. These algorithms take in data about the defendant, such as their race, age, and prior arrests. Models are learned from this data, and the result is a binary decision of “will” or “will not reoffend” for each defendant. This decision has real implications for the defendant’s life, since the assessment can affect their sentence or parole length.
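The pipeline described above (defendant features in, a hard binary decision out) can be sketched roughly as follows. This is an illustration only: the feature names are hypothetical, the data is synthetic, and real risk assessment tools like COMPAS are proprietary.

```python
# Illustrative sketch only: a binary risk classifier on synthetic
# defendant records (hypothetical features, NOT real COMPAS data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical features: age, number of prior arrests
X = rng.integers(low=[18, 0], high=[70, 15], size=(200, 2))
# Synthetic stand-in label for "reoffended"
y = (X[:, 1] > 5).astype(int)

model = LogisticRegression().fit(X, y)
# The model's output is a hard binary decision per defendant:
# 1 = "will reoffend", 0 = "will not reoffend"
decision = model.predict([[25, 8]])[0]
```

The point the talk makes is that this single bit, not the underlying probability, is what reaches the judge or parole officer.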
Every state except Alaska uses some form of risk assessment in its criminal justice system, whether a commercial tool like the COMPAS software (ref) or something developed in-house. The method and purpose sometimes vary within a state, and not all judges in a state let these assessments influence their decisions. As awareness of this risk assessment software grows, more research is being done to evaluate its ethics and accuracy, as well as to invent techniques for discrimination discovery and prevention.
In most cases, however, these models are not the ultimate arbiter but are embedded in decision-support applications used by non-machine-learning experts such as judges or parole officers. It’s the responsibility of the designers of these applications to ensure that the end users making decisions based on model predictions can tell whether a prediction is biased.
What people will think depends on what the results are and how they are communicated. In this preliminary work, Rachel and Casey built on work by their AI research colleagues Calmon, Wei, Vinzamuri, Ramamurthy, and Varshney (2017), who developed a novel probabilistic formulation of data preprocessing for reducing discrimination. Their approach poses a convex optimization problem for learning a data transformation with three goals: controlling discrimination, limiting distortion in individual data samples, and preserving utility. They applied this preprocessing to the COMPAS risk assessment data to improve the fairness of the model’s binary decision across groups of defendants. They then trained two models, a decision tree and a logistic regression model, on both the preprocessed and the original datasets. Rachel and Casey used predictions from these two models in their user interface design study.
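The downstream comparison can be sketched in a few lines. Note this is not the Calmon et al. preprocessing itself (which is a convex optimization over the data distribution); it is only a hedged illustration, on synthetic data with a hypothetical protected attribute, of how one might train both model types and compare favorable-outcome rates across groups.

```python
# Sketch: train a decision tree and a logistic regression, then
# compare favorable-outcome rates across a hypothetical protected
# group. Synthetic data; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 500
group = rng.integers(0, 2, size=n)       # hypothetical protected attribute
priors = rng.poisson(3 + 2 * group)      # synthetic, correlated with group
X = np.column_stack([group, priors])
y = (priors + rng.normal(0, 1, n) > 4).astype(int)

def disparate_impact(model, X, group):
    """Ratio of favorable ('will not reoffend' = 0) prediction rates
    between the two groups; closer to 1.0 means more equal treatment."""
    fav = (model.predict(X) == 0)
    return fav[group == 1].mean() / fav[group == 0].mean()

for m in (DecisionTreeClassifier(max_depth=3), LogisticRegression()):
    m.fit(X, y)
    print(type(m).__name__, round(disparate_impact(m, X, group), 2))
```

Preprocessing in the spirit of Calmon et al. would transform the training data so that this ratio moves toward 1.0 while limiting how far individual records are distorted.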
For the study, they created explanations in the form of visual representations of each model type (decision trees and relative factor weighting graphs), paired with example predictions from the preprocessed and original datasets. Feedback from more than 100 users suggests that both representations helped people judge the preprocessed dataset to be fairer, but only the decision tree made them trust their decision.
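A rough sketch of the two explanation styles follows: a rendered decision tree and the relative factor weights of a logistic regression. The feature names and data here are hypothetical (the study used COMPAS-derived data, and its graphs were visual rather than textual).

```python
# Sketch of the two explanation styles: a decision-tree rendering
# and relative factor weights. Hypothetical features, synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
features = ["age", "prior_arrests"]  # hypothetical feature names

# Explanation style 1: the decision tree itself, shown to the user
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
tree_view = export_text(tree, feature_names=features)
print(tree_view)

# Explanation style 2: relative weighting of each factor
lr = LogisticRegression().fit(X, y)
weights = dict(zip(features, np.abs(lr.coef_[0])))
print(weights)
```

The study's finding was that both forms helped users judge fairness, but the tree-style view was the one that also supported trust.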
Rachel Bellamy is a principal research scientist and manages the Human-AI Collaboration Group at the IBM T. J. Watson Research Center, where she leads an interdisciplinary team of human-computer interaction experts, user experience designers, and user experience engineers. Previously, she worked in the Advanced Technology Group at Apple, where she conducted research on collaborative learning and led an interdisciplinary team that worked with the San Francisco Exploratorium and schools to pioneer the design, implementation, and use of media-rich collaborative learning experiences for K–12 students. She holds many patents and has published more than 70 research papers. Rachel holds a PhD in cognitive psychology from the University of Cambridge and a BS in psychology with mathematics and computer science from the University of London.
Casey Dugan is the manager of the AI Experience Lab at IBM Research in Cambridge. Her group is an interdisciplinary team made up of designers, engineers, and HCI researchers. They design, build, and study systems at the intersection of HCI & AI, especially human-AI interaction. She has worked in the research areas of social media, analytics and visualization dashboards, human computation and crowdsourcing, and recommender systems since joining IBM. Her projects have ranged from designing meeting rooms of the future to studying #selfiestations, or kiosks for taking selfies at IBM labs around the world. She earned a couple of degrees from MIT and spent two summers interning with the IBM lab. Outside of work, she’s taught chocolate sculpture to teenagers, drinks a lot of Starbucks, and has a big fluffy dog named Lincoln.
©2018, O’Reilly UK Ltd • All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.