Executive Briefing: Explaining machine learning models
Who is this presentation for?
- Data scientists, engineers, research scientists, and anyone who builds, runs, or acts on the output of machine learning models
ML methods have been causing a revolution in several fields, including science, technology, finance, healthcare, and cybersecurity. For instance, ML can identify objects in images, translate between languages, enable web search, perform medical diagnosis, and classify fraudulent transactions—all with surprising accuracy. Unfortunately, much of this progress has come with ML models, especially ones based on deep neural networks, getting more complex and opaque. An overarching question that arises is why the model made its prediction. This question matters to developers in debugging (mis-)predictions, to evaluators in assessing the robustness and fairness of the model, and to end users in deciding whether they can trust the model.
Ankur Taly explores the problem of understanding individual predictions by attributing them to input features—a problem that’s received a lot of attention in the last couple of years. Ankur details an attribution method called integrated gradients that’s applicable to a variety of deep neural networks (object recognition, text categorization, machine translation, etc.) and is backed by an axiomatic justification, and he covers applications of the method to debug model predictions, increase model transparency, and assess model robustness. He also dives into a classic result from cooperative game theory, the Shapley value, which has recently been extensively applied to explaining predictions made by nondifferentiable models such as decision trees, random forests, and gradient-boosted trees. Time permitting, you’ll get a sneak peek of the Fiddler platform and how it incorporates several of these techniques to demystify models.
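The core of integrated gradients can be sketched in a few lines. The method attributes a prediction to input features by accumulating gradients along a straight-line path from a baseline input to the actual input. The sketch below is a minimal illustration, not code from the talk: it uses a toy quadratic model with a hand-written gradient, whereas real use would obtain gradients from a neural network via automatic differentiation.

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=50):
    """Riemann-sum (midpoint) approximation of integrated gradients:
    IG_i = (x_i - baseline_i) * (1/m) * sum_k dF/dx_i evaluated at
    points along the straight line from the baseline to x."""
    diff = x - baseline
    total = np.zeros_like(x)
    for k in range(1, steps + 1):
        # Midpoint of the k-th interval along the path.
        point = baseline + ((k - 0.5) / steps) * diff
        total += grad_f(point)
    return diff * total / steps

# Toy "model": f(x) = sum(x^2), so its gradient is 2x.
f = lambda x: np.sum(x ** 2)
grad_f = lambda x: 2 * x

x = np.array([1.0, 2.0])
baseline = np.zeros(2)  # the all-zero baseline, a common choice
attr = integrated_gradients(grad_f, x, baseline)
# attr ≈ [1., 4.]; the attributions sum to f(x) - f(baseline) = 5,
# illustrating the completeness axiom the method is built around.
```

The final comment illustrates the axiomatic justification mentioned above: attributions are guaranteed to add up to the difference between the prediction at the input and at the baseline.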
Prerequisite knowledge
- A basic understanding of machine learning
What you'll learn
- Understand the risks of black box machine learning models
- Learn techniques to mitigate some of the risks
Ankur Taly is the head of data science at Fiddler, where he’s responsible for developing, productionizing, and evangelizing core explainable AI technology. Previously, he was a staff research scientist at Google Brain, where he carried out research in explainable AI and is best known for his contribution to developing and applying integrated gradients, a new interpretability algorithm for deep networks. His research in this area has resulted in publications at top-tier machine learning conferences and in prestigious journals, including the journal of the American Academy of Ophthalmology (AAO) and the Proceedings of the National Academy of Sciences (PNAS). Besides explainable AI, Ankur has a broad research background and has published 25+ papers in areas including computer security, programming languages, formal verification, and machine learning. He’s served on several academic conference program committees (PLDI, POST, and PLAS), delivered invited lectures at universities and various industry venues, and instructed short courses at summer schools and conferences. Ankur earned his PhD in computer science from Stanford University and a BTech in CS from IIT Bombay.