Put AI to work
June 26-27, 2017: Training
June 27-29, 2017: Tutorials & Conference
New York, NY

Interpretable AI: Not just for regulators

Patrick Hall (H2O.ai | George Washington University), SriSatish Ambati (H2O.ai)
4:50pm-5:30pm Thursday, June 29, 2017
Impact of AI on business and society
Location: Beekman Level: Intermediate
Average rating: ***** (5.00, 1 rating)

Prerequisite Knowledge

  • A working knowledge of widely used linear modeling approaches and machine learning algorithms
  • Familiarity with popular deep learning architectures and algorithms

What you'll learn

  • Explore numerous deep learning and machine learning interpretability techniques
  • Learn a new vocabulary for describing and classifying these techniques

Description

While understanding and trusting models and their results is a hallmark of good (data) science, model interpretability is a serious legal mandate in the regulated verticals of banking, insurance, and other industries. Moreover, scientists, physicians, researchers, and humans in general have the right to understand and trust the models and modeling results that affect their work and their lives. Today, many are embracing deep learning and machine learning techniques, but what happens when people want to explain these impactful, complex technologies or when these technologies inevitably make mistakes?

Patrick Hall and SriSatish Ambati share several approaches beyond the error measures and assessment plots typically used to interpret deep learning and machine learning models and results. Wherever possible, interpretability approaches are deconstructed into more basic components suitable for human storytelling: complexity, scope, understanding, and trust.

Topics include:

  • Data visualization techniques for representing high-degree interactions and nuanced data structures
  • Contemporary linear model variants that incorporate machine learning and are appropriate for use in regulated industry
  • Cutting-edge approaches for explaining extremely complex deep learning and machine learning models
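One widely used post hoc technique in this family is the partial dependence plot, which shows a complex model's average prediction as a single input varies. The session abstract does not name specific methods or code, so the following is only an illustrative sketch of partial dependence computed by hand with scikit-learn; the dataset and model are assumptions, not the presenters' material.

```python
# Illustrative sketch only: manual partial dependence for one feature
# of a gradient boosting model trained on synthetic data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic regression problem (assumed for illustration)
X, y = make_regression(n_samples=500, n_features=5, n_informative=5,
                       random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Partial dependence of feature 0: sweep it over a grid while leaving
# all other features at their observed values, then average predictions.
grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 20)
pd_vals = []
for v in grid:
    Xv = X.copy()
    Xv[:, 0] = v  # force feature 0 to the grid value for every row
    pd_vals.append(model.predict(Xv).mean())
```

Plotting `pd_vals` against `grid` gives a one-dimensional story about how the model responds to that feature on average, which is the kind of human-readable summary the talk's "scope" and "understanding" framing is after.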

Patrick Hall

H2O.ai | George Washington University

Patrick Hall is a senior director for data science products at H2O.ai, where he focuses mainly on model interpretability and model management. Patrick is also an adjunct professor in the Department of Decision Sciences at George Washington University, where he teaches graduate classes in data mining and machine learning. Previously, Patrick held global customer-facing and R&D roles at SAS Institute. He holds multiple patents in automated market segmentation using clustering and deep neural networks. Patrick is the eleventh person worldwide to become a Cloudera Certified Data Scientist. He studied computational chemistry at the University of Illinois before graduating from the Institute for Advanced Analytics at North Carolina State University.


SriSatish Ambati

H2O.ai

SriSatish Ambati is the cofounder and CEO of H2O.ai, makers of H2O, the leading open source machine learning platform, and Driverless AI, which speeds up data science workflows by automating feature engineering, model tuning, ensembling, and model deployment. Sri is known for envisioning killer apps in fast-evolving spaces and assembling stellar teams to productize that vision. A regular speaker on the big data, NoSQL, and Java circuit, Sri leaves a trail @srisatish.