Mar 15–18, 2020

Introducing the AI Explainability 360 open source toolkit

Dennis Wei (IBM Research)
1:30pm–5:00pm Monday, March 16, 2020
Location: 210 B

Level

Intermediate

As artificial intelligence and machine learning algorithms make further inroads into society, calls are increasing from multiple stakeholders for these algorithms to explain their outputs. At the same time, these stakeholders, whether they’re affected citizens, government regulators, domain experts, or system developers, have different requirements for explanations. To address these needs, Dennis Wei introduces AI Explainability 360 (AIX360), an open source software toolkit featuring eight diverse and state-of-the-art explainability methods and two evaluation metrics.
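For attendees who want to explore the toolkit before the tutorial, here is a minimal sketch of installing it and trying one of the eight methods and the two metrics. The package name aix360, the module paths, and the explain() signature follow the project's public repository at the time of this session and should be treated as assumptions; later releases may differ.

```python
# A minimal sketch, not an official quick start. Package name and module
# paths follow the AIX360 GitHub repository at the time of this session
# and are assumptions; later releases may differ.
#
#   pip install aix360

import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer           # one of the eight methods
from aix360.metrics import faithfulness_metric, monotonicity_metric  # the two metrics

# ProtoDash picks a small set of prototypical rows that summarize a dataset.
X = np.random.rand(200, 5)        # toy feature matrix
explainer = ProtodashExplainer()
# explain() returns prototype weights and the indices of the selected rows;
# the exact signature here is an assumption based on the project docs.
weights, proto_idx, _ = explainer.explain(X, X, m=5)
print("Prototype rows:", proto_idx)
```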

Equally important, he presents a taxonomy that helps those who need explanations navigate the space of explanation methods, covering not only the methods in the toolkit but also those in the broader explainability literature. For data scientists and other toolkit users, an extensible software architecture organizes the methods according to their place in the AI modeling pipeline. Together, the toolkit and taxonomy can help identify gaps where more explainability methods are needed and provide a platform for incorporating them as they’re developed.
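To make the idea of the taxonomy concrete, the sketch below encodes a simplified decision table that maps the kind of explanation a consumer needs (data vs. model, local vs. global, directly interpretable vs. post hoc) to candidate methods shipped in the toolkit. The groupings reflect one reading of the AIX360 materials, not the toolkit's authoritative classification, and the helper function is purely illustrative.

```python
# Illustrative sketch only: a simplified lookup table inspired by the
# AIX360 taxonomy. The groupings reflect one reading of the project's
# materials and are not an official API or classification.
TAXONOMY = {
    ("data", "samples"): ["ProtoDash"],
    ("data", "features"): ["DIP-VAE"],
    ("model", "global", "directly interpretable"): ["BRCG", "GLRM"],
    ("model", "global", "post hoc"): ["ProfWeight"],
    ("model", "local", "directly interpretable"): ["TED"],
    ("model", "local", "post hoc"): ["CEM", "CEM-MAF", "ProtoDash"],
}

def suggest_methods(*needs):
    """Return candidate explainability methods for a consumer's stated needs."""
    return TAXONOMY.get(tuple(needs), [])

# Example: a loan applicant asking why their application was denied by an
# opaque model needs a local, post hoc explanation of that model.
print(suggest_methods("model", "local", "post hoc"))  # ['CEM', 'CEM-MAF', 'ProtoDash']
```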

You’ll learn to use and contribute to AIX360 directly from its creators and discover how to become a member of the community. Compared to existing open source efforts on AI explainability, AIX360 takes a step forward in focusing on a greater diversity of ways of explaining, on usability in industry, and on software engineering. By integrating these three aspects, AIX360 can attract researchers in AI explainability and help translate their collective research results for practicing data scientists and developers deploying solutions in a variety of industries.

Prerequisite knowledge

  • A basic understanding of machine learning
  • Experience using Python for conducting data science (useful but not required)

Materials or downloads needed in advance

What you'll learn

  • Understand the diverse landscape of explainability techniques and how to choose the appropriate technique for your use case

Dennis Wei

IBM Research

Dennis Wei is a research staff member with IBM Research AI. He holds a PhD degree in electrical engineering from the Massachusetts Institute of Technology (MIT). His recent research interests center around trustworthy machine learning, including explainability and interpretability, fairness, and causality.

