While machine learning can be quite powerful, ML models can contain problematic biases in many forms that may reinforce or magnify societal unfairness and inequality. When developers use pretrained models in their applications, it’s important that they understand what biases are embedded in their data and models and how unfairness might manifest in the development or use of those applications.
Hallie Benjamin offers an introduction to the emerging field of machine learning fairness, explains how it’s relevant to the developer community, and shares resources for learning more.
Hallie Benjamin is a senior strategist on the ethical ML team at Google, where she helps teams design and build products that work for everyone. She is also the cofounder of f[AI]r startups, a nonprofit launched out of the 2018 Assembly program at the Harvard Berkman Klein Center and MIT Media Lab, which empowers the startup ecosystem to responsibly and ethically build advanced technologies. Previously, Hallie was a principal with Accenture’s Technology Labs. Originally from Toronto, she holds an MA in economics and international relations from the University of Toronto and a BA in economics and political science from McGill University.
©2018, O'Reilly Media, Inc.