Presented By O’Reilly and Intel AI
Put AI to Work
April 29-30, 2018: Training
April 30-May 2, 2018: Tutorials & Conference
New York, NY

Racial bias in facial recognition software

Stephanie Kim (Algorithmia)
11:55am–12:35pm Wednesday, May 2, 2018
Implementing AI
Location: Nassau East/West

Who is this presentation for?

  • Data scientists, product owners, and software engineers

Prerequisite knowledge

  • A basic understanding of machine learning

What you'll learn

  • How to avoid common pitfalls caused by racial bias in training data when building facial recognition models

Description

Using OpenFace as an example face recognition model, Stephanie Kim discusses the basics of facial recognition and the importance of having diverse datasets when building out a model. Along the way, she explores racial bias in datasets using real-world examples and shares a use case for developing an OpenFace model for a celebrity look-alike app—and outlines how it can fail with homogeneous datasets.
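At its core, the look-alike matching described above is a nearest-neighbor search over face embeddings: OpenFace maps each face to a 128-dimensional vector, and the app returns the celebrity whose vector is closest to the query face. A minimal sketch of that matching step, assuming the embeddings have already been computed (the celebrity names and vectors below are made-up placeholders, not real OpenFace output):

```python
import numpy as np

# Hypothetical, precomputed 128-dimensional face embeddings.
# OpenFace represents each face as a 128-d vector; these names
# and vectors are placeholders for illustration only.
rng = np.random.default_rng(0)
celebrity_embeddings = {
    "celebrity_a": rng.normal(size=128),
    "celebrity_b": rng.normal(size=128),
    "celebrity_c": rng.normal(size=128),
}

def closest_match(query, gallery):
    """Return the gallery name whose embedding is nearest (Euclidean) to the query."""
    return min(gallery, key=lambda name: np.linalg.norm(gallery[name] - query))

# A query face whose embedding sits very close to celebrity_b's
# should be matched to celebrity_b.
query = celebrity_embeddings["celebrity_b"] + rng.normal(scale=0.01, size=128)
print(closest_match(query, celebrity_embeddings))
```

This also illustrates the failure mode the talk addresses: if the training data is homogeneous, the model produces poorly separated embeddings for underrepresented faces, and this nearest-neighbor step returns unreliable matches for those users.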


Stephanie Kim

Algorithmia

Stephanie Kim is a developer evangelist at Algorithmia, where she writes accessible documentation, tutorials, and scripts that help developers find fun and useful ways to incorporate machine learning into their applications. Stephanie is the founder of Seattle PyLadies and a co-organizer of the Seattle Building Intelligent Applications Meetup. She enjoys machine learning projects, particularly ones where she gets to dive into unstructured text data, using natural language processing techniques to discover friction points in a UI or find out what users are thinking. Her passions include machine learning, NLP, and writing helpful, fun articles that make machine learning accessible to anyone. She has spoken at a number of conferences, including PyData and ACT-W, a women's tech conference, where she gave a talk that was later turned into a blog post.