Introducing the AI Explainability 360 open source toolkit
As artificial intelligence and machine learning algorithms make further inroads into society, calls are increasing from multiple stakeholders for these algorithms to explain their outputs. At the same time, these stakeholders, whether they’re affected citizens, government regulators, domain experts, or system developers, have different requirements for explanations. To address these needs, Dennis Wei introduces AI Explainability 360 (AIX360), an open source software toolkit featuring eight diverse and state-of-the-art explainability methods and two evaluation metrics.
Equally important, he identifies a taxonomy to help entities requiring explanations to navigate explanation methods, not only those in the toolkit but also in the broader literature on explainability. For data scientists and other toolkit users, there’s an extensible software architecture that organizes methods according to their place in the AI modeling pipeline. Together, the toolkit and taxonomy can help identify gaps where more explainability methods are needed and provide a platform to incorporate them as they’re developed.
You’ll learn to use and contribute to AIX360 directly from its creators and discover how to become a member of the community. Compared to existing open source efforts on AI explainability, AIX360 takes a step forward in focusing on a greater diversity of ways of explaining, usability in industry, and software engineering. By integrating these three aspects, AIX360 can attract researchers in AI explainability and help translate their collective research results for practicing data scientists and developers deploying solutions in a variety of industries.
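To give a flavor of the kind of post-hoc explanation such toolkits provide, the sketch below computes a simple perturbation-based feature attribution for a single prediction. This is a minimal illustration of the general idea only, not AIX360's API; the model, data, and helper function here are synthetic assumptions for demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data: the label depends almost entirely on feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def perturbation_importance(model, x, background, n=100):
    """Attribution for one instance x: how much does replacing each
    feature with background values shift the predicted probability?"""
    base = model.predict_proba(x[None, :])[0, 1]
    scores = np.zeros(len(x))
    for j in range(len(x)):
        xs = np.tile(x, (n, 1))
        # Replace feature j with values drawn from the background data.
        xs[:, j] = rng.choice(background[:, j], size=n)
        scores[j] = np.abs(base - model.predict_proba(xs)[:, 1]).mean()
    return scores

scores = perturbation_importance(model, X[0], X)
# Since the label is driven by feature 0, it should receive the
# largest attribution score.
```

Real toolkits such as AIX360 package many such methods, both post-hoc attributions like this and directly interpretable models, behind a common interface so they can be compared on the same use case.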
Prerequisite knowledge
- A basic understanding of machine learning
- Experience using Python for conducting data science (useful but not required)
Materials or downloads needed in advance
- A laptop with a complete Python installation
What you'll learn
- Understand the diverse landscape of explainability techniques and how to choose the appropriate technique for your use case
Dennis Wei is a research staff member with IBM Research AI. He holds a PhD degree in electrical engineering from the Massachusetts Institute of Technology (MIT). His recent research interests center around trustworthy machine learning, including explainability and interpretability, fairness, and causality.