Mar 15–18, 2020

FairML from theory to practice: Lessons drawn from our journey to build a fair product

Divya Sivasankaran (integrate.ai)
11:00am–11:40am Tuesday, March 17, 2020
Location: 210 F

Who is this presentation for?

  • Data scientists, product managers, heads of data science, and ethics officers

Level

Beginner

Description

Building fairness into any system (ML or otherwise) is hard. There is no universally agreed-upon philosophy of what it means to be just or fair: is it equal opportunity, demographic parity, or something else? How can you make sure that machine learning models in production are just and fair? And are the people in your company the right people to decide what's fair? Divya Sivasankaran starts with two simple proposals that are both wrong, but instructive in their wrongness.

If business users were given a fairness dial that they could turn up and down and weigh against other business metrics (e.g., revenue), fairness and bias could come to be treated as a zero-sum game in which some other key metric has to be sacrificed. If data scientists could ping an API to compute scores for their models across ~30 definitions of fairness, this could create a sense of "being fair" as long as one of them shows a good result. Metrics are only helpful if you understand what it really means when they move up and down.
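
For data scientists wondering what two of those definitions look like in code, here is a minimal sketch (not integrate.ai's implementation; the function names and toy data are hypothetical, for illustration only) of demographic parity and equal opportunity computed with NumPy:

    import numpy as np

    def demographic_parity_diff(y_pred, group):
        """Gap in positive-prediction rates between two groups."""
        rate_a = y_pred[group == 0].mean()
        rate_b = y_pred[group == 1].mean()
        return abs(rate_a - rate_b)

    def equal_opportunity_diff(y_true, y_pred, group):
        """Gap in true-positive rates between two groups
        (assumes each group has at least one true positive)."""
        tpr_a = y_pred[(group == 0) & (y_true == 1)].mean()
        tpr_b = y_pred[(group == 1) & (y_true == 1)].mean()
        return abs(tpr_a - tpr_b)

    # Hypothetical labels and predictions for two groups.
    y_true = np.array([1, 1, 0, 0, 1, 1, 1, 0])
    y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 1])
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

    print(demographic_parity_diff(y_pred, group))         # 0.75 (rates 0.25 vs 1.0)
    print(equal_opportunity_diff(y_true, y_pred, group))  # 0.50 (TPRs 0.5 vs 1.0)

Even on this toy data the two metrics disagree in magnitude, which is why a dashboard of scores across ~30 definitions tells you little until you understand what each one actually measures.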

It’s difficult to build fairness into products when there are competing incentives such as optimizing for revenue or conversion. In building fairness capabilities into products used across industries (e.g., telco, banking, retail), integrate.ai has encountered these questions and more. The company strongly believes that the larger goal shouldn’t be to build fairer products, but to build products that drive better decision making by clarifying problems and opportunities. Join in to hear integrate.ai’s biggest challenges and the insights drawn from its experience.

If you already know the importance of attending to bias and fairness in machine learning, Divya helps you see how to turn ideas and good intentions into action. If you’re a data scientist, bring along a business partner to the talk (and vice versa) to learn how you can work together to bring fair solutions into production.

What you'll learn

  • How to bring fairness and bias work from theory to practice
  • Common challenges in bringing FairML capabilities to production, and approaches to tackle them

Divya Sivasankaran

integrate.ai

Divya Sivasankaran is a machine learning scientist at integrate.ai, where she focuses on building out FairML capabilities within its products. Previously, she worked for a startup that partnered with government organizations (police forces and healthcare) to build AI capabilities intended to bring about positive change. Those experiences shaped her thinking about the larger ethical implications of AI in the wild and the need to bring ethical considerations forward to the design stage (proactive rather than reactive).

