October 28–31, 2019

Build more inclusive TensorFlow pipelines with fairness indicators

Tulsee Doshi (Google), Christina Greer (Google)
4:10pm–4:50pm Wednesday, October 30, 2019
Location: Grand Ballroom E
Average rating: ***** (5.00, 2 ratings)

Who is this presentation for?

  • Developers who work on ML

Level

Intermediate

Description

Machine learning (ML) continues to drive monumental change across products and industries. But as we expand the reach of ML to even more sectors and users, it’s ever more critical to ensure that these pipelines work well for all users.

Tulsee Doshi and Christina Greer share insights from their work proactively building for fairness, drawing on case studies from Google products. They explain the metrics that have been fundamental to evaluating their models at scale and the techniques that have proven valuable in driving improvements. They also announce the launch of Fairness Indicators, a new feature built into TensorFlow Extended (TFX) on top of TensorFlow Model Analysis, and demonstrate how it helps developers compute metrics that identify common fairness risks and drive more inclusive development.

You’ll leave with an awareness of how algorithmic bias might manifest in your product, the ways you could measure and improve performance, and how Google’s Fairness Indicators can help.
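To make the idea concrete, here is a minimal sketch of the kind of computation Fairness Indicators surfaces: a classification metric (false positive rate) evaluated per slice of the data, so that gaps between groups become visible. This is illustrative only, not the Fairness Indicators or TensorFlow Model Analysis API; the function names, threshold, and example data are hypothetical.

```python
# Illustrative sketch (NOT the Fairness Indicators API): compute one of the
# metrics Fairness Indicators reports -- false positive rate -- sliced by a
# group feature. All names and data below are hypothetical.

def false_positive_rate(labels, preds):
    """FPR = FP / (FP + TN) over binary labels and binary predictions."""
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

def fpr_by_slice(examples, threshold=0.5):
    """Group examples by a slice key and report FPR per group.

    Each example is (label, model_score, group); scores above `threshold`
    count as positive predictions.
    """
    slices = {}
    for label, score, group in examples:
        ys, ps = slices.setdefault(group, ([], []))
        ys.append(label)
        ps.append(1 if score > threshold else 0)
    return {g: false_positive_rate(ys, ps) for g, (ys, ps) in slices.items()}

# Hypothetical data: (label, model_score, group)
examples = [
    (0, 0.2, "a"), (0, 0.3, "a"), (1, 0.9, "a"),
    (0, 0.7, "b"), (0, 0.4, "b"), (1, 0.8, "b"),
]
print(fpr_by_slice(examples))  # -> {'a': 0.0, 'b': 0.5}
```

A large gap between slices (here, group "b" has a much higher false positive rate than group "a") is exactly the kind of signal that flags a potential fairness risk worth investigating.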

Prerequisite knowledge

  • A basic understanding of TensorFlow (useful but not required)

What you'll learn

  • Learn how to tactically identify and evaluate ML fairness risks using Fairness Indicators

Tulsee Doshi

Google

Tulsee Doshi is the product lead for Google’s ML fairness effort, where she leads the development of Google-wide resources and best practices for developing more inclusive and diverse products. Previously, Tulsee worked on the YouTube recommendations team. She earned her BS in symbolic systems and MS in computer science from Stanford University.


Christina Greer

Google

Christina Greer is a software engineer on the Google Brain team. She focuses specifically on machine learning fairness in the context of model evaluation and understanding, and scaling up solutions for ML fairness to support many teams across Google. Previously, Christina worked on building infrastructure to support diverse Google products: Google Assistant, Cloud Dataflow, and ads. Working in this area of ML fairness allows her to combine building infrastructure at Google scale with advancing efforts to avoid creating or reinforcing existing biases. Christina earned her BS in computer science from the University of Kansas.


Contact us

confreg@oreilly.com

For conference registration information and customer service

partners@oreilly.com

For more information on community discounts and trade opportunities with O’Reilly conferences

sponsorships@oreilly.com

For information on exhibiting or sponsoring a conference

pr@oreilly.com

For media/analyst press inquiries