Presented By O’Reilly and Cloudera
Make Data Work
September 11, 2018: Training & Tutorials
September 12–13, 2018: Keynotes & Sessions
New York, NY

Practical techniques for interpreting machine learning models

Patrick Hall (bnh.ai | H2O.ai), Avni Wadhwa (H2O.ai), Mark Chan (H2O.ai)
9:00am–12:30pm Tuesday, 09/11/2018
Location: 1A 23/24 Level: Intermediate
Secondary topics: Ethics and Privacy, Health and Medicine
Average rating: 4.50 (4 ratings)

Who is this presentation for?

  • Researchers, scientists, data analysts, predictive modelers, business users, and any other professionals who use or consume machine learning techniques

Prerequisite knowledge

  • A working knowledge of Python, widely used linear modeling approaches, and machine learning algorithms

Materials or downloads needed in advance

  • A laptop with a recent version of the Firefox or Chrome browser installed (This tutorial will use a QwikLabs environment.)

What you'll learn

  • Understand several practical machine learning interpretability techniques and how to use them with Python
  • Learn the best way to use these techniques and common pitfalls to avoid when applying them

Description

Transparency, auditability, and stability of predictive models and results are typically key differentiators in effective machine learning applications. Patrick Hall, Avni Wadhwa, and Mark Chan share tips and techniques learned through implementing interpretable machine learning solutions in industries like financial services, telecom, and health insurance. Using a set of publicly available and highly annotated examples, Patrick, Avni, and Mark teach several holistic approaches to interpretable machine learning. The examples use the well-known University of California Irvine (UCI) credit card dataset and popular open source packages to train constrained, interpretable machine learning models and visualize, explain, and test more complex machine learning models in the context of an example credit-risk application. Along the way, Patrick, Avni, and Mark draw on their applied experience to highlight crucial success factors and common pitfalls not typically discussed in blog posts and open source software documentation, such as the importance of both local and global interpretability and the approximate nature of nearly all machine learning interpretability techniques.

Outline:

Enhancing transparency in machine learning models with Python and XGBoost (example Jupyter notebook)

  • Use monotonicity constraints to train an explainable—and potentially regulator-approvable—gradient boosting machine (GBM) credit risk model
  • Use partial dependence plots and individual conditional expectation (ICE) plots to investigate the global and local mechanisms of the monotonic GBM and verify its monotonic behavior
  • Use Shapley explanations to derive reason codes for model predictions (see the code sketch below)
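A minimal sketch of the monotonic GBM and Shapley steps, assuming the preprocessed UCI credit card data is already split into a pandas DataFrame X_train and label vector y_train; the feature order and constraint values are illustrative assumptions, not the tutorial's actual configuration:

import xgboost as xgb

# Assumed inputs: X_train (credit card features) and y_train (binary default indicator).
dtrain = xgb.DMatrix(X_train, label=y_train)

params = {
    "objective": "binary:logistic",
    "eta": 0.1,
    "max_depth": 4,
    # One entry per training column, in column order:
    # 1 = predictions may only increase with the feature, -1 = only decrease, 0 = unconstrained.
    # These placeholder values would come from domain knowledge in practice.
    "monotone_constraints": "(1,-1,0,0,1)",
}
monotonic_gbm = xgb.train(params, dtrain, num_boost_round=200)

# Per-row Shapley contributions (the last column is the bias term); ranking each
# row's contributions by magnitude is one way to generate reason codes.
shap_values = monotonic_gbm.predict(xgb.DMatrix(X_train), pred_contribs=True)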

Increasing transparency and accountability in your machine learning project with Python (example Jupyter notebook)

  • Train a decision tree surrogate model on the original inputs and predictions of a complex GBM credit risk model to create an overall, approximate flowchart of the complex model’s predictions (a code sketch follows this list)
  • Compare the global variable importance from the GBM and from the surrogate decision tree, as well as the interactions displayed in the decision tree, against human domain expertise and reasonable expectations
  • Use a variant of the leave-one-covariate-out (LOCO) technique to calculate the local contribution each input variable makes toward each model prediction, to enhance local understanding of the complex GBM’s behavior and the accountability of its predictions
  • Rank local contributions to generate regulator-mandated reason codes that describe, in plain English, the GBM’s decision process for every prediction
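A rough illustration of the surrogate idea, assuming X_train holds the original model inputs and gbm_preds holds the complex GBM's predicted probabilities on the same rows (both names are assumptions):

from sklearn.tree import DecisionTreeRegressor, export_text

# Fit a shallow, interpretable tree to the complex model's predictions, not the true labels.
surrogate = DecisionTreeRegressor(max_depth=3)
surrogate.fit(X_train, gbm_preds)

# The surrogate's importances and printed rules serve as an approximate flowchart
# of the GBM, to be checked against domain expertise and reasonable expectations.
print(surrogate.feature_importances_)
print(export_text(surrogate, feature_names=list(X_train.columns)))

Keeping max_depth small trades fidelity for readability; the surrogate's fit statistics on the GBM's predictions indicate how much to trust the resulting flowchart.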

Explaining your predictive models to business stakeholders with local interpretable model-agnostic explanations (LIME) using Python and H2O (example Jupyter Notebook)

  • Explore a straightforward method of creating local samples for LIME that can be more appropriate for real-time scoring of new data in production applications
  • Use LIME to understand local trends in the complex model’s predictions and calculate the local contribution of each input variable toward each model prediction
  • Sort these contributions to create reason codes (i.e., regulator-mandated, plain English explanations of every model prediction)
  • Validate LIME results to enhance trust in the generated explanations, using the local model’s R2 statistic and a ranked predictions plot (see the code sketch below)
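A minimal sketch with the open source lime package, where X_train, X_test, and predict_proba (the complex model's probability function) are assumed to exist:

from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    training_data=X_train.values,
    feature_names=list(X_train.columns),
    mode="classification",
)

explanation = explainer.explain_instance(
    X_test.values[0], predict_proba, num_features=5
)

# Local contributions, sortable into plain-English reason codes ...
print(explanation.as_list())
# ... and the local linear model's R^2, a rough check on explanation quality.
print(explanation.score)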

Testing machine learning models for accuracy, trustworthiness, and stability with Python and H2O (example Jupyter notebook)

  • Explore sensitivity analysis, perhaps the most important validation technique for increasing trust in machine learning models: their predictions can vary drastically for small changes in input variable values, especially outside the training input domain (a code sketch follows this list)
  • Debug a trained GBM credit risk model using residual analysis to find problems arising from overfitting and outliers
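A bare-bones sensitivity check along these lines, assuming a scikit-learn-style model object gbm with a predict_proba method and a held-out DataFrame X_test; the swept column and value grid are illustrative:

import numpy as np

def sweep(model, X, column, values):
    """Hold all other inputs fixed and sweep one column over a grid of values."""
    means = []
    for v in values:
        X_perturbed = X.copy()
        X_perturbed[column] = v
        means.append(model.predict_proba(X_perturbed)[:, 1].mean())
    return means

# Probe values well beyond the training range to see whether predictions stay stable.
grid = np.linspace(0, 2_000_000, 5)
for value, mean_pred in zip(grid, sweep(gbm, X_test, "LIMIT_BAL", grid)):
    print(f"LIMIT_BAL={value:,.0f} -> mean predicted default probability {mean_pred:.3f}")

Residual analysis follows a similar spirit: plot the difference between observed outcomes and predicted probabilities against each input to spot problems arising from overfitting and outliers.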

Patrick Hall

bnh.ai | H2O.ai

Patrick Hall is principal scientist at bnh.ai, a boutique law firm focused on AI and analytics; a senior director of product at H2O.ai, a leading Silicon Valley machine learning software company; and a lecturer in the Department of Decision Sciences at George Washington University, where he teaches graduate classes in data mining and machine learning.

At both bnh.ai and H2O.ai, he works to mitigate AI risks and advance the responsible practice of machine learning. Previously, Patrick held global customer-facing and R&D roles at SAS. He holds multiple patents in automated market segmentation using clustering and deep neural networks. Patrick is the 11th person worldwide to become a Cloudera Certified Data Scientist. He studied computational chemistry at the University of Illinois before graduating from the Institute for Advanced Analytics at North Carolina State University.


Avni Wadhwa

H2O.ai

Avni Wadhwa is an analytics and marketing hacker at H2O.ai, where she does a mix of marketing and sales engineering. She holds a BS in management science from the University of California, San Diego.


Mark Chan

H2O.ai

Mark Chan is a hacker and data scientist at H2O.ai. Previously, he was a quantitative research developer at Thomson Reuters and Nipun Capital and a data scientist at an IoT startup, where he built a web-based machine learning platform and developed predictive models. Mark holds an MS in financial engineering from UCLA and a BS in computer engineering from the University of Illinois Urbana-Champaign. In his spare time, Mark likes competing on Kaggle and cycling.

Comments on this page are now closed.

Comments

Juan Hernandez | STAFF MACHINE LEARNING ENGINEER
07/25/2018 2:53pm EDT

Which pass would I need to attend this talk?

Sonia Zapien | MARKETING SPECIALIST
07/25/2018 2:58pm EDT

Hi Juan – Either a Gold or Silver pass would be sufficient to attend this talk.