Machine learning models are often so complex that the relationship between their inputs and outputs can seem like a black box. A modern neural network, for example, might look at thousands of features and perform millions of additions and multiplications to produce a prediction. But how do we explain that prediction to someone else? How do we tell which features are important and why? And if we can’t understand how a model makes a prediction, can we really trust it to run our business, draw medical conclusions, or make an unbiased decision about an applicant’s eligibility for a loan?
Explainability techniques clarify how models make decisions, offering answers to these questions and giving us confidence that our models are functioning properly (or not). Each technique applies to a different set of models, makes different assumptions, and answers a slightly different question, but used properly, these techniques can help meet business requirements and improve model performance.
Armen Donigian shares several examples of two of the main types of explainability. The first directly relates inputs to outputs, a naturally intuitive approach that includes local interpretable model-agnostic explanations (LIME), axiomatic attributions, VisualBackProp, and traditional feature contributions. The second makes use of the data the model was trained on: DeepLIFT, for example, can show which training examples were most relevant to a model’s decision, while scrambling and prototype methods offer overviews of the decision-making process. Along the way, Armen discusses how ZestFinance approaches explainability, offering a practical guide for your own work. While there is no perfect “silver bullet” explainability technique, understanding when and how to use these approaches enables you to explain many useful models and gives you a broad view of current explainability best practices and research.
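To give a flavor of the “scrambling” idea mentioned above, here is a minimal sketch of permutation-style feature importance: scramble one feature at a time and measure how much the model’s accuracy drops. The dataset, model, and feature setup are illustrative assumptions (ordinary scikit-learn APIs), not the specific approach covered in the talk.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)

# Toy dataset and model, purely for illustration.
X, y = make_classification(n_samples=500, n_features=4, n_informative=2,
                           n_redundant=0, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.score(X, y)

# For each feature, permute (scramble) its column and record the accuracy drop.
# A large drop suggests the model relies heavily on that feature.
importances = []
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importances.append(baseline - model.score(X_perm, y))

print(importances)
```

Features the model depends on produce the largest accuracy drops, giving a quick, model-agnostic overview of the decision-making process without inspecting the model’s internals.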
Armen Donigian is team lead for modeling tools and explainability at ZestFinance. He started his career working on outdoor navigation algorithms using Kalman filters and later transitioned to build assisted GPS point positioning solutions at NASA’s Jet Propulsion Laboratory. After the landing of the Mars Curiosity Rover, he helped build data-driven products at several startups. Armen holds undergraduate and graduate degrees in computer science from UCLA and USC.
©2018, O'Reilly Media, Inc.