The art of explainability: Removing the bias from AI
Who is this presentation for?
- AI strategy leads, AI implementers, data scientists, business analysts, and product analysts
With the rise of AI, computers can take on many tasks traditionally done by humans, increasing efficiency and productivity. From manufacturing to finance, industries are realizing the importance of AI and exploring how best to adopt it into their workstreams. A key inhibiting factor, however, is a lack of trust in AI: several AI deployments have been rolled back after negative publicity over bias and trustworthiness issues. Recognizing these risks, governments are introducing regulations to help consumers understand AI-made decisions. Enterprises need an explainable approach to AI—one that lets them better manage the business risks of deploying AI in use cases ranging from loan underwriting and fraud detection to automated diagnostics and content moderation.
Krishna Gade outlines how “explainable AI” fills a critical gap in operationalizing AI, including explaining ML-flagged fraud transactions, policy underwriting decisions, loan denial by ML models, and business intelligence like customer churn, regional marketing campaigns, and more. Adopting an explainable approach to AI and integrating it into the end-to-end ML workflow from training to production offers benefits such as the early identification of biased data and better confidence in model outputs.
Prerequisite knowledge
- A basic understanding of AI and how to implement AI in typical use cases
What you'll learn
- Discover which questions to ask about AI systems to better manage bias
- Learn why AI makes certain predictions and the factors behind those decisions
- Learn how to anticipate customer concerns and how best to answer them
- Understand the importance of explainability in AI
Krishna is the cofounder and CEO of Fiddler Labs, an enterprise startup building an explainable AI engine to address problems of bias, fairness, and transparency in AI. Previously, he led the team that built Facebook's explainability feature "Why am I seeing this?" He's an entrepreneur with a technical background, with experience creating scalable platforms and expertise in converting data into intelligence. Having held senior engineering leadership roles at Facebook, Pinterest, Twitter, and Microsoft, he's seen the effects that bias has on AI and machine learning decision-making processes. With Fiddler, his goal is to enable enterprises across the globe to solve this problem.