Slice and explain: A unified paradigm for explaining ML models
Who is this presentation for?
- Data scientists, business analysts, and risk analysts
Complex ML models are rapidly spreading to high-stakes tasks such as credit scoring, underwriting, medical diagnosis, and crime prediction. While they often beat the state of the art by huge margins, the black-box nature of modern ML models is a major stumbling block in their adoption. Without the ability to interpret or explain predictions made by ML models, end users struggle to trust their predictions; data scientists struggle to monitor, validate, and refine models; and regulators struggle to assess compliance.
There’s a surge in techniques for explaining ML models, ranging from attributing individual predictions to features, to distilling interpretable rules from a model, to identifying data slices where the model performs poorly. Each of these techniques offers a standalone tool for understanding a particular aspect of model behavior. As such, you may have to apply multiple tools to obtain a holistic picture of how the model reasons.
Ankur Taly showcases a new paradigm for model explanations called “slice and explain” that unifies several existing explanation tools in a single framework. In a nutshell, it identifies a slice of data and obtains an explanation of how the model behaves on that slice. This lets you answer pointed questions about the model, such as how fair it is with respect to a particular protected group, and perform exploratory analysis of model behavior across different parts of a dataset. The key technical innovation of slice and explain lies in blending model explanations with traditional data drill-down analysis.
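To make the idea concrete, here is a minimal sketch of the slice-and-explain pattern: define a slice of the data, compute per-example feature attributions, and aggregate them over the slice. This is a hypothetical illustration using a toy linear model (where attributions are exact), not Fiddler’s actual implementation; all names and data are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy credit-scoring data: two features plus a binary group column
# (e.g., a protected attribute). Purely synthetic.
n = 1000
income = rng.normal(50, 10, n)
debt = rng.normal(20, 5, n)
group = rng.integers(0, 2, n)

# A simple linear "model" so per-example feature attributions are exact:
# attribution of a feature = weight * (value - baseline value).
w_income, w_debt = 0.8, -1.2
baseline = np.array([income.mean(), debt.mean()])

def attributions(x_income, x_debt):
    """Per-example feature attributions (exact for a linear model)."""
    return np.stack([w_income * (x_income - baseline[0]),
                     w_debt * (x_debt - baseline[1])], axis=1)

attr = attributions(income, debt)

def explain_slice(mask, attr, names=("income", "debt")):
    """Aggregate attributions over a data slice to summarize model behavior there."""
    mean_attr = attr[mask].mean(axis=0)
    return dict(zip(names, mean_attr.round(3)))

# Slice and explain: compare how the model treats each group slice.
for g in (0, 1):
    print(f"group={g}:", explain_slice(group == g, attr))
```

The same drill-down applies to any slice you can express as a boolean mask (a region, a time window, a model-error bucket), which is what lets one framework answer both pointed fairness questions and open-ended exploratory ones.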
You’ll get a technical overview of the slice and explain framework developed and deployed at Fiddler Labs, along with several applications of the framework.
Prerequisite knowledge
- A basic understanding of ML and data analysis
What you'll learn
- Understand the importance of explainability for ML models
- Learn about existing model explanation tools and how they fit into slice and explain
- Discover how users, from data scientists and business analysts to regulators and auditors, can leverage slice and explain to analyze models
Ankur Taly is the head of data science at Fiddler, where he’s responsible for developing, productionizing, and evangelizing core explainable AI technology. Previously, he was a staff research scientist at Google Brain, where he carried out research in explainable AI and was best known for his contribution to developing and applying integrated gradients, a new interpretability algorithm for deep networks. His research in this area has resulted in publications at top-tier machine learning conferences and prestigious journals such as those of the American Academy of Ophthalmology (AAO) and the Proceedings of the National Academy of Sciences (PNAS). Beyond explainable AI, Ankur has a broad research background and has published 25+ papers in areas including computer security, programming languages, formal verification, and machine learning. He’s served on several academic conference program committees (PLDI, POST, and PLAS), delivered invited lectures at universities and industry venues, and instructed short courses at summer schools and conferences. Ankur earned his PhD in computer science from Stanford University and a BTech in CS from IIT Bombay.