Sep 9–12, 2019

Executive Briefing: Explaining machine learning models

Ankur Taly (Fiddler)
2:35pm–3:15pm Thursday, September 12, 2019
Location: LL21 A/B
Secondary topics: Ethics, Security, and Privacy
Average rating: 5.00 (4 ratings)

Who is this presentation for?

  • Data scientists, engineers, research scientists, and anyone else who builds, runs, or acts on machine learning models

Level

Intermediate

Description

ML methods are revolutionizing fields from science and technology to finance, healthcare, and cybersecurity. For instance, ML can identify objects in images, translate between languages, power web search, aid medical diagnosis, and flag fraudulent transactions, all with surprising accuracy. Unfortunately, much of this progress has come at the cost of ML models, especially those based on deep neural networks, becoming more complex and opaque. The overarching question that arises is why the model made its prediction. This question matters to developers debugging (mis)predictions, to evaluators assessing a model's robustness and fairness, and to end users deciding whether they can trust the model.

Ankur Taly explores the problem of understanding individual predictions by attributing them to input features, a problem that's received a lot of attention in the last couple of years. Ankur details an attribution method called integrated gradients that's applicable to a variety of deep neural networks (object recognition, text categorization, machine translation, etc.) and is backed by an axiomatic justification, and he covers applications of the method to debug model predictions, increase model transparency, and assess model robustness. He also dives into a classic result from cooperative game theory, the Shapley value, which has recently been applied extensively to explain predictions made by nondifferentiable models such as decision trees, random forests, and gradient-boosted trees. Time permitting, you'll get a sneak peek at the Fiddler platform and how it incorporates several of these techniques to demystify models.
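
To make the attribution idea concrete, here is a minimal sketch of integrated gradients in Python, using JAX for automatic differentiation. The toy model f, the all-zero baseline, and the step count are illustrative assumptions rather than details from the talk; the method itself attributes a prediction to input features by accumulating gradients along the straight-line path from a baseline input to the actual input.

    # Minimal sketch of integrated gradients (assumes JAX is installed).
    # The model f and the baseline are illustrative stand-ins.
    import jax
    import jax.numpy as jnp

    def f(x):
        # Toy differentiable "model": a scalar score over three input features.
        w = jnp.array([0.5, -1.2, 2.0])
        return jnp.tanh(jnp.dot(w, x))

    def integrated_gradients(f, x, baseline, steps=50):
        # Approximates IG_i(x) = (x_i - x'_i) * ∫_0^1 ∂f(x' + a(x - x'))/∂x_i da
        # with a Riemann sum over `steps` points on the path from x' to x.
        alphas = jnp.linspace(0.0, 1.0, steps)
        path = baseline + alphas[:, None] * (x - baseline)  # (steps, features)
        grads = jax.vmap(jax.grad(f))(path)                 # gradient at each point
        return (x - baseline) * grads.mean(axis=0)

    x = jnp.array([1.0, 0.3, -0.7])
    baseline = jnp.zeros_like(x)  # an all-zero baseline is a common choice
    attributions = integrated_gradients(f, x, baseline)
    # Completeness: attributions sum (approximately) to f(x) - f(baseline).
    print(attributions, attributions.sum(), f(x) - f(baseline))

For Shapley values, here is an equally hedged brute-force sketch for a black-box model: exact enumeration over feature coalitions, where a feature absent from a coalition is replaced by its baseline value. This is readable but exponential in the number of features; practical tools instead rely on sampling or model-specific algorithms (e.g., for trees). The predict function below is a made-up example, not a model from the talk.

    # Brute-force Shapley values for a black-box predict function.
    from itertools import combinations
    from math import factorial

    def shapley_values(predict, x, baseline):
        n = len(x)
        values = [0.0] * n
        for i in range(n):
            others = [j for j in range(n) if j != i]
            for size in range(n):
                for subset in combinations(others, size):
                    # Classic Shapley weight |S|! (n - |S| - 1)! / n!
                    weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                    with_i = [x[j] if (j in subset or j == i) else baseline[j] for j in range(n)]
                    without_i = [x[j] if j in subset else baseline[j] for j in range(n)]
                    values[i] += weight * (predict(with_i) - predict(without_i))
        return values

    # Example: a small nonadditive "model"; the interaction v[0]*v[1] is split
    # equally between features 0 and 1, and feature 2 gets its full effect.
    predict = lambda v: v[0] * v[1] + 2 * v[2]
    print(shapley_values(predict, [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]))  # [1.0, 1.0, 6.0]

In both sketches the attributions sum to the difference between the model's output at the input and at the baseline (exactly for Shapley, approximately for the Riemann-sum version of integrated gradients); this completeness, or efficiency, property is part of the axiomatic justification mentioned above.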

Prerequisite knowledge

  • A basic understanding of machine learning

What you'll learn

  • Understand the risks of black box machine learning models
  • Learn techniques to mitigate some of these risks

Ankur Taly

Fiddler

Ankur Taly is the head of data science at Fiddler, where he's responsible for developing, productionizing, and evangelizing core explainable AI technology. Previously, he was a staff research scientist at Google Brain, where he carried out research in explainable AI and is best known for his contributions to developing and applying integrated gradients, an interpretability algorithm for deep networks. His research in this area has resulted in publications at top-tier machine learning conferences and in prestigious journals such as the journal of the American Academy of Ophthalmology (AAO) and the Proceedings of the National Academy of Sciences (PNAS). Besides explainable AI, Ankur has a broad research background and has published 25+ papers in areas including computer security, programming languages, formal verification, and machine learning. He's served on several academic conference program committees (PLDI, POST, and PLAS), delivered invited lectures at universities and industry venues, and taught short courses at summer schools and conferences. Ankur earned his PhD in computer science from Stanford University and a BTech in computer science from IIT Bombay.

Sponsors

  • Intel AI
  • O'Reilly
  • Amazon Web Services
  • IBM Watson
  • Dataiku
  • Dell Technologies
  • Intuit
  • Gamalon
  • H2O.ai
  • Hewlett Packard Enterprise
  • MapR Technologies
  • Sisu Data

Contact us

confreg@oreilly.com

For conference registration information and customer service

partners@oreilly.com

For more information on community discounts and trade opportunities with O’Reilly conferences

Become a sponsor

For information on exhibiting or sponsoring a conference

pr@oreilly.com

For media/analyst press inquiries