
Explaining machine learning models

Armen Donigian (ZestFinance)
4:50pm-5:30pm Thursday, September 6, 2018
Implementing AI
Location: Continental 1-3
Secondary topics: Ethics, Privacy, and Security; Health and Medicine
Average rating: 3.50 (2 ratings)

Who is this presentation for?

  • Data scientists, analysts, and other stakeholders in model deployment

What you'll learn

  • Understand why machine learning model explainability is important but can be difficult to achieve
  • Explore leading approaches to model explainability, including their strengths and limitations
  • Discover how explainability approaches can solve critical business problems such as model validation

Description

Machine learning models are often complex, and their internal representations are so large and abstract that the relationship between their inputs and outputs can seem like a black box. A modern neural network, for example, might look at thousands of features and perform millions of additions and multiplications to produce a prediction. But how do we explain that prediction to someone else? How do we tell which features are important and why? And if we can’t understand how a model makes a prediction, do we really trust it to run our business, draw medical conclusions, or make an unbiased decision about an applicant’s eligibility for a loan?

Explainability techniques clarify how models make decisions, offering answers to these questions and giving us confidence that our models are functioning properly (or not). Each technique applies to a different set of models, makes different assumptions, and answers a slightly different question, but used properly, these methods can meet business requirements and improve model performance.

Armen Donigian shares examples from two of the main families of explainability techniques. The first directly relates inputs to outputs, a naturally intuitive approach that includes local interpretable model-agnostic explanations (LIME), axiomatic attributions, DeepLIFT, VisualBackProp, and traditional feature contributions. The second makes use of the data the model was trained on: example-based methods, for instance, can trace which training examples were most relevant to a model’s decision, while scrambling and prototype methods offer overviews of the decision-making process. Along the way, Armen discusses how ZestFinance approaches explainability, offering a practical guide for your own work. While there is no perfect “silver bullet” explainability technique, understanding when and how to use these approaches lets you explain many useful models and gives you a broad view of current explainability best practices and research.
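To make the first family concrete, here is a minimal sketch of an input-to-output explanation using LIME. It assumes the open-source lime and scikit-learn packages; the dataset, the random forest model, and parameter choices such as num_features=5 are illustrative assumptions for this sketch, not details of the speaker’s or ZestFinance’s actual pipeline.

    # Minimal LIME sketch: explain one prediction of an otherwise opaque model.
    # Assumes `pip install lime scikit-learn`; dataset and model are illustrative.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    data = load_breast_cancer()
    X, y = data.data, data.target

    # Train a black-box model whose individual predictions we want to explain.
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # LIME perturbs the instance, fits a simple local surrogate model, and reports
    # which features pushed this particular prediction up or down.
    explainer = LimeTabularExplainer(
        X,
        feature_names=list(data.feature_names),
        class_names=list(data.target_names),
        mode="classification",
    )
    explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
    for feature, weight in explanation.as_list():
        print(f"{feature}: {weight:+.3f}")

For the scrambling family mentioned above, scikit-learn’s permutation_importance function offers a similarly compact, model-agnostic view of which features a trained model relies on globally.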


Armen Donigian

ZestFinance

Armen Donigian is team lead for modeling tools and explainability at ZestFinance. He started his career working on outdoor navigation algorithms using Kalman filters and later transitioned to building assisted GPS point positioning solutions at NASA’s Jet Propulsion Laboratory. After the landing of the Mars Curiosity Rover, he helped build data-driven products at several startups. Armen holds undergraduate and graduate degrees in computer science from UCLA and USC.