Presented By O'Reilly and Cloudera
Make Data Work
September 25–26, 2017: Training
September 26–28, 2017: Tutorials & Conference
New York, NY

Interpretable AI: Not just for regulators

Patrick Hall (H2O.ai | George Washington University), SriSatish Ambati (H2O.ai)
4:35pm–5:15pm Wednesday, September 27, 2017
Law, ethics, governance, Machine Learning & Data Science
Location: 1A 06/07 Level: Intermediate
Average rating: 5.00 (1 rating)

Who is this presentation for?

  • Researchers, scientists, data analysts, predictive modelers, and other practitioners who use deep learning and machine learning techniques
  • Business analysts who would like to use these techniques or consume the results of such techniques and want more insight into how these types of models make decisions and predictions

Prerequisite knowledge

  • A working knowledge of widely used linear modeling approaches and machine learning algorithms
  • Familiarity with popular deep learning architectures and algorithms

What you'll learn

  • Explore numerous deep learning and machine learning interpretability techniques
  • Learn a new vocabulary for describing and classifying these techniques


While understanding and trusting models and their results is a hallmark of good (data) science, model interpretability is a serious legal mandate in the regulated verticals of banking, insurance, and other industries. Moreover, scientists, physicians, researchers, and humans in general have the right to understand and trust the models and modeling results that affect their work and their lives. Today, many are embracing deep learning and machine learning techniques, but what happens when people want to explain these impactful, complex technologies or when these technologies inevitably make mistakes?

Patrick Hall and SriSatish Ambati share several approaches beyond the error measures and assessment plots typically used to interpret deep learning and machine learning models and results. Wherever possible, interpretability approaches are deconstructed into more basic components suitable for human storytelling: complexity, scope, understanding, and trust.

Topics include:

  • Data visualization techniques for representing high-degree interactions and nuanced data structures
  • Contemporary linear model variants that incorporate machine learning and are appropriate for use in regulated industry
  • Cutting-edge approaches for explaining extremely complex deep learning and machine learning models

For more information, see Patrick and Sri’s recent article “Ideas on interpreting machine learning” on O’Reilly Ideas.
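As a flavor of the model-agnostic techniques the talk surveys, here is a minimal sketch (not taken from the talk itself) of one-dimensional partial dependence, a widely used way to summarize how a complex model's average prediction changes as one input varies. The `predict` function below is a hypothetical stand-in for any fitted black-box model:

```python
# Minimal partial-dependence sketch. Assumes any model exposed as a
# predict(X) -> array function; the toy model here is illustrative only.
import numpy as np

def partial_dependence(predict, X, feature, grid):
    """Average the model's prediction over the data while sweeping one feature."""
    pd_values = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value      # force the chosen feature to a fixed value
        pd_values.append(predict(X_mod).mean())
    return np.array(pd_values)

# Toy "black box" with an interaction between features 0 and 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
predict = lambda X: X[:, 0] ** 2 + X[:, 0] * X[:, 1]

grid = np.linspace(-2, 2, 5)
pd_curve = partial_dependence(predict, X, feature=0, grid=grid)
print(pd_curve)  # roughly U-shaped, reflecting the quadratic term in feature 0
```

The resulting curve is a one-dimensional story about a potentially very complex model, which is exactly the kind of simplification into human-interpretable components the talk discusses.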


Patrick Hall | H2O.ai | George Washington University

Patrick Hall is a senior director for data science products at H2O.ai, where he focuses mainly on model interpretability. Patrick is also currently an adjunct professor in the Department of Decision Sciences at George Washington University, where he teaches graduate classes in data mining and machine learning. Previously, Patrick held global customer-facing and R&D research roles at SAS Institute. He holds multiple patents in automated market segmentation using clustering and deep neural networks. Patrick is the 11th person worldwide to become a Cloudera Certified Data Scientist. He studied computational chemistry at the University of Illinois before graduating from the Institute for Advanced Analytics at North Carolina State University.


SriSatish Ambati

SriSatish Ambati is the cofounder and CEO of H2O.ai, makers of H2O, the leading open source machine learning platform, and Driverless AI, which speeds up data science workflows by automating feature engineering, model tuning, ensembling, and model deployment. Sri is known for envisioning killer apps in fast-evolving spaces and assembling stellar teams toward productizing that vision. A regular speaker on the big data, NoSQL, and Java circuit, Sri leaves a trail @srisatish.