The nascent field of fair machine learning aims to ensure that decisions guided by algorithms are equitable. Over the last few years, several formal definitions of fairness have gained prominence, such as requiring that false positive rates be equal across demographic groups. As Sharad Goel argues, nearly all of these definitions suffer from significant statistical limitations. Perversely, when used as a design constraint, they can even harm the very groups they were intended to protect.
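To make the equal-false-positive-rate definition concrete, here is a minimal sketch of how one might compute false positive rates separately per group and compare them. The data and function names below are illustrative choices, not from the talk itself:

```python
def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN), computed over the actual negatives."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

def fpr_by_group(y_true, y_pred, groups):
    """Return {group: FPR}, restricting to each group's records."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        rates[g] = false_positive_rate([y_true[i] for i in idx],
                                       [y_pred[i] for i in idx])
    return rates

# Synthetic example: two groups with unequal error rates.
y_true = [0, 0, 1, 0, 1, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(fpr_by_group(y_true, y_pred, groups))
```

A fairness constraint of this kind would require the per-group rates to be (approximately) equal; the statistical critique in the talk concerns what enforcing such equality does to the underlying decisions.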
Sharad Goel is an assistant professor in the Department of Management Science and Engineering at Stanford University and the founder and director of the Stanford Computational Policy Lab. He also holds courtesy appointments in the Computer Science and Sociology Departments and the Law School. Previously, he was a senior researcher at Yahoo and Microsoft in New York City. In his research, Sharad looks at public policy through the lens of computer science, bringing a new, computational perspective to a diverse range of contemporary social issues, including policing, incarceration, and elections. He holds a PhD in applied mathematics from Cornell University.
©2019, O'Reilly Media, Inc.