Presented By O'Reilly and Cloudera
Make Data Work
September 26–27, 2016: Training
September 27–29, 2016: Tutorials & Conference
New York, NY

Why should I trust you? Explaining the predictions of machine-learning models

Carlos Guestrin (Apple | University of Washington)
11:20am–12:00pm Wednesday, 09/28/2016
Data science & advanced analytics
Location: Hall 1C Level: Intermediate
Average rating: 4.45 (20 ratings)

Prerequisite knowledge

  • Basic experience with machine learning

What you'll learn

  • Get a practical industry perspective on gaining trust in machine-learning models
  • Explore a practical algorithm that can be used to bring trust to models throughout industry

Description

Despite widespread adoption, machine-learning models remain mostly black boxes, making it very difficult to understand the reasons behind a prediction. Such understanding is fundamentally important for assessing trust in a model before we act on a prediction or choose to deploy a new ML service. It also provides insights into the model, which can be used to turn an untrustworthy model or prediction into a trustworthy one.

Carlos Guestrin offers an overview of LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction, as well as a method to explain models by presenting representative individual predictions and their explanations in a nonredundant way.

Carlos demonstrates the flexibility of these methods by explaining different models for text (e.g., random forests) and image classification (e.g., deep neural networks) and explores the usefulness of explanations via novel experiments, both simulated and with human subjects. These explanations empower users in various scenarios that require trust, such as deciding whether to trust a prediction, choosing between models, improving an untrustworthy classifier, and detecting why a classifier should not be trusted.
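The local-surrogate idea described above — perturb an instance, query the black box, weight samples by proximity, and fit a simple interpretable model — can be sketched in a few lines of NumPy. This is a minimal illustration of the technique, not the authors' implementation; the toy `black_box` classifier, the Gaussian perturbations, and all parameter choices here are assumptions for demonstration:

```python
import numpy as np

# Toy stand-in for a black-box classifier (hypothetical; in practice this
# would be, e.g., a random forest's or neural network's predict_proba).
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(3.0 * X[:, 0] - 0.5 * X[:, 1] ** 2)))

def lime_explain(predict_fn, x, n_samples=5000, kernel_width=0.75, seed=0):
    """Explain predict_fn at instance x via a locally weighted linear model."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance to sample its neighborhood.
    Z = x + rng.normal(size=(n_samples, x.shape[0]))
    # 2. Query the black box on the perturbed points.
    y = predict_fn(Z)
    # 3. Weight each sample by proximity to x (exponential kernel).
    w = np.exp(-((Z - x) ** 2).sum(axis=1) / kernel_width ** 2)
    # 4. Fit a weighted least-squares linear surrogate; its coefficients
    #    are the per-feature explanation of this one prediction.
    Zb = np.hstack([np.ones((n_samples, 1)), Z])  # add intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(Zb * sw[:, None], y * sw, rcond=None)
    return coef[1:]  # drop the intercept

# Near x = (0, 0) the toy model is driven almost entirely by feature 0,
# and the local explanation recovers that as a large weight on feature 0.
weights = lime_explain(black_box, np.zeros(2))
```

The surrogate is faithful only locally: its coefficients describe the model's behavior around this one instance, not globally.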


Carlos Guestrin

Apple | University of Washington

Carlos Guestrin is the director of machine learning at Apple and the Amazon Professor of Machine Learning in Computer Science and Engineering at the University of Washington. Carlos was the cofounder and CEO of Turi (formerly Dato and GraphLab), a machine-learning company acquired by Apple. A world-recognized leader in the field of machine learning, Carlos was named one of the 2008 Brilliant 10 by Popular Science. He received the 2009 IJCAI Computers and Thought Award for his contributions to artificial intelligence and a Presidential Early Career Award for Scientists and Engineers (PECASE).