A practical guide toward explainability and bias evaluation in AI and machine learning
Who is this presentation for?
- Software engineers, data scientists, managers, senior technical leaders, product managers, delivery managers, machine learning engineers, and researchers
The concepts of “undesired bias” and “black box models” in machine learning have become highly discussed topics due to the numerous high-profile incidents covered by the media. It’s certainly a challenging subject; it could even be said that the concept of societal bias is inherently biased in itself, depending on an individual’s (or group’s) perspective.
Alejandro Saucedo doesn’t reinvent the wheel; he frames the issue of AI explainability so it can be tackled with traditional methods. He covers the high-level definitions of bias in machine learning to remove ambiguity, then demystifies the topic through a hands-on example: automating the loan-approval process for a company using machine learning. Working through this challenge step by step, you’ll apply key tools and techniques from the latest research, together with domain-expert knowledge at the right points, to explain decisions and mitigate undesired bias in machine learning models.
Alejandro breaks undesired bias down into two constituent parts: a priori societal bias and a posteriori statistical bias. He offers tangible examples of how undesired bias is introduced at each step, along with some very interesting research findings on the topic. Spoiler alert: Alejandro takes a pragmatic approach, showing that any nontrivial system will always have some inherent bias, so the objective is not to remove bias but to get your system as close as possible to your objectives, and your objectives as close as possible to the ideal solution.
Prerequisite knowledge
- Experience with a machine learning project
What you'll learn
- Gain an overview of the concept of bias in machine learning
- Learn the three key steps to assess bias throughout the lifecycle of a machine learning model
- Understand how key machine learning concepts (such as feature importance, class imbalance, model analysis, and partial dependence) are applied in a practical example, and how these data science fundamentals can be used to engage key domain experts
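Two of the concepts listed above, class imbalance and feature importance, can be illustrated with a minimal sketch. This is not the speaker's material: the synthetic stand-in for a loan-approval dataset and the scikit-learn-based workflow are assumptions for illustration only.

```python
# Hedged sketch: quantifying class imbalance and measuring feature
# importance on a synthetic stand-in for a loan-approval dataset.
from collections import Counter

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic dataset (class 1 = "loan approved"), with a deliberate
# ~90/10 class imbalance via the `weights` argument.
X, y = make_classification(
    n_samples=1000, n_features=5, n_informative=3,
    weights=[0.9], random_state=0,
)

# Step 1: check class balance before training anything.
print("class counts:", Counter(y))

# Step 2: train a model and compute permutation feature importance,
# i.e. how much shuffling each feature degrades held-out accuracy.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)

for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

A ranking like this is a starting point for the conversation with domain experts: a high-importance feature that a loan officer would consider illegitimate (or a proxy for a protected attribute) is a concrete signal of undesired bias.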
The Institute for Ethical AI & Machine Learning
Alejandro Saucedo is the chief scientist at the Institute for Ethical AI & Machine Learning, where he leads highly technical research on machine learning explainability, bias evaluation, reproducibility, and responsible design. Previously, Alejandro held technical leadership positions across hypergrowth scale-ups and tech giants including Eigen Technologies, Bloomberg LP, and Hack Partners. He has a strong track record of building departments of machine learning engineers from scratch and leading the delivery of large-scale machine learning systems across the financial, insurance, legal, transport, manufacturing, and construction sectors (in Europe, the US, and Latin America).