Unified tooling for machine learning interpretability

Who is this presentation for?
- Data scientists, machine learning engineers, and software engineers
Description
In machine learning, there is often a trade-off between accuracy and intelligibility: the most accurate models (e.g., random forests, boosted trees, and neural nets) are usually not very intelligible, and the most intelligible models (e.g., linear or logistic regression) are usually less accurate. Interpretability research has focused on minimizing this trade-off by developing more accurate interpretable models and by developing new techniques to explain black box models.
Harsha Nori, Samuel Jenkins, and Rich Caruana walk you through a framework for thinking about interpretability and help you choose the right interpretability method for a variety of real-world tasks. They also present new tooling from Microsoft that helps with all forms of interpretability: both training accurate, interpretable models and understanding black box models. This toolkit includes the first Python implementation of a powerful new learning algorithm developed at Microsoft: GA2M.
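To make the trade-off described above concrete, the sketch below trains a logistic regression (one of the intelligible models mentioned) on a hypothetical toy approval dataset using plain gradient descent, then reads its explanation directly off the learned weights. This is purely illustrative and is not the Microsoft toolkit or GA2M; the data and feature names are invented for the example.

```python
import math

# Hypothetical toy data: predict "approved" from (income, debt).
X = [(3.0, 1.0), (2.5, 2.0), (1.0, 3.0), (4.0, 0.5),
     (0.5, 2.5), (3.5, 1.5), (1.5, 3.5), (2.0, 0.5)]
y = [1, 1, 0, 1, 0, 1, 0, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on the logistic loss.
w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(2000):
    for (x1, x2), target in zip(X, y):
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - target
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

# The model is its own explanation: each weight is that feature's
# additive contribution to the log-odds of approval.
print(f"income weight: {w[0]:+.2f}")
print(f"debt weight:   {w[1]:+.2f}")
print(f"bias:          {b:+.2f}")
```

On this toy data the income weight comes out positive and the debt weight negative, so the model's behavior can be stated in one sentence, which is exactly the intelligibility that black box models like boosted trees give up.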
Prerequisite knowledge
- Experience training and evaluating machine learning systems
What you'll learn
- Discover when and how to use a variety of machine learning interpretability methods through case studies of real-world situations
- Learn how to use a new interpretability Python toolkit from Microsoft

Harsha Nori
Microsoft
Harsha Nori is a data scientist at Microsoft. He works on interpretability for machine learning.

Samuel Jenkins
Microsoft
Samuel Jenkins is a data scientist at Microsoft. He works on interpretability for machine learning.

Rich Caruana
Microsoft
Rich Caruana is a principal researcher at Microsoft Research. Previously, he was on the faculty in the Computer Science Department at Cornell University, at UCLA’s Medical School, and at Carnegie Mellon University’s Center for Learning and Discovery. Rich received an NSF CAREER Award in 2004 (for meta clustering); best paper awards in 2005 (with Alex Niculescu-Mizil), 2007 (with Daria Sorokina), and 2014 (with Todd Kulesza, Saleema Amershi, Danyel Fisher, and Denis Charles); co-chaired KDD in 2007 (with Xindong Wu); and serves as area chair for Neural Information Processing Systems (NIPS), International Conference on Machine Learning (ICML), and KDD. His research focus is on learning for medical decision making, transparent modeling, deep learning, and computational ecology. He holds a PhD from Carnegie Mellon University, where he worked with Tom Mitchell and Herb Simon. His thesis on multi-task learning helped create interest in a new subfield of machine learning called transfer learning.