Explainable AI: Your model is only as good as your explanation
Who is this presentation for?
Data scientists or analysts
Rapid adoption of artificial intelligence (AI) in applications across a variety of industries has led to many new challenges. In a world in which machine learning (ML) models make complex decisions of the utmost significance—even operating autonomously—you need to interpret, understand, and explain the reasoning behind ML-assisted predictions. You’ll learn what steps you can take to build fairness, accountability, and transparency into your models to instill confidence and trust in your results among end users. You’ll also learn how to avoid the harmful consequences of unintended bias in your algorithms.
In recent years, the field of explainable AI (XAI) has gained traction, driven by companies’ growing recognition that it’s critically important both to explain the reasoning behind every ML-assisted decision in terms that humans can understand and to detect undesirable ML defects before such systems are deployed.
Talia Tron and Joy Rimchala delve into the latest XAI developments and techniques, current state-of-the-art interpretability approaches, advantages and drawbacks of black box versus intelligible (glass box) models, and concept-based diagnostics.
You’ll discover how Intuit applies design thinking principles to its development processes to build interpretability and transparency into its ML models, thereby building trust and confidence with users of its financial software and services products: QuickBooks, TurboTax, and Mint. Design thinking is a methodology for creative problem solving developed at the Stanford d.school and is used by world-class design firms like IDEO and many of the world’s leading brands like Apple, Google, Samsung, and GE.
Prerequisite knowledge
- Familiarity with fundamental data science concepts and terminology
What you'll learn
- Discover ways to integrate interpretability and explainability into existing ML models and to construct intelligible models
- Learn how to evaluate and choose the explanations most suitable for your use cases
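As a rough illustration of the glass box idea the session covers, the sketch below shows why a simple linear model is intelligible: its prediction decomposes into per-feature contributions that can be shown directly to an end user. The feature names and weights are hypothetical toy values for illustration only, not Intuit's actual models.

```python
# Hypothetical weights for a toy risk score (assumed for illustration,
# not taken from the talk or from Intuit's products).
weights = {"income": -0.4, "debt_ratio": 0.9, "late_payments": 1.5}
bias = 0.1

def score(features):
    """Return the risk score plus a per-feature contribution breakdown."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    return bias + sum(contributions.values()), contributions

applicant = {"income": 2.0, "debt_ratio": 0.5, "late_payments": 1.0}
total, parts = score(applicant)
# Each entry in `parts` states exactly how much that feature moved the score,
# which is what makes a glass box model explainable; a black box model
# offers no comparable built-in breakdown.
```

A post hoc explanation technique (such as feature attribution applied to a black box) approximates this kind of breakdown from the outside, which is one trade-off the session's black box versus glass box comparison addresses.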
Talia Tron is a senior data scientist on the ML technologies futures team at Intuit, where she leads the effort on explainable AI. She previously worked on the security risk and fraud team, where she used ML and AI solutions to detect threats and fraud in Intuit’s products. She leads Intuit’s innovation catalyst local community, driving customer obsession and design thinking across the Israeli site. Previously, she was a data scientist in Microsoft’s advanced threat analytics group (ATA R&D), developed customized e-learning tools in the Microsoft Education Group, and cofounded the interdisciplinary psychiatry group, which brings together clinicians, neuroscientists, and data scientists to advance brain-related psychiatric evaluation and treatment. Talia holds a PhD in computational neuroscience from the Hebrew University, where she developed automatic tools for analyzing facial expressions and motor behavior in schizophrenia. She conducted research in collaboration with the Sheba Medical Center Innovation Center, using ML to explore and predict treatment outcomes and develop medical decision support systems.
Joy Rimchala is a data scientist in Intuit’s Machine Learning Futures Group working on ML problems in limited-label data settings. Joy holds a PhD from MIT, where she spent five years doing biological object tracking experiments and modeling them using Markov decision processes.