Fairness in machine learning has predominantly been studied in static classification settings without concern for how decisions change the underlying population over time. Conventional wisdom suggests that fairness criteria promote the long-term well-being of those groups they aim to protect.
In a recent paper, Lydia Liu and her collaborators explore how static fairness criteria interact with temporal indicators of well-being, such as long-term improvement, stagnation, and decline in a variable of interest. They demonstrate that even in a one-step feedback model, common fairness criteria do not, in general, promote improvement over time and may in fact cause harm in cases where an unconstrained objective would not. They completely characterize the delayed impact of three standard criteria, contrasting the regimes in which each exhibits qualitatively different behavior. In addition, they find that a natural form of measurement error broadens the regime in which fairness criteria perform favorably.
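The one-step feedback model is easiest to see in the paper's running example of lending: a bank sets an acceptance threshold on credit scores, and the scores of accepted applicants then rise or fall depending on whether they repay. The sketch below illustrates that dynamic for a demographic-parity constraint versus an unconstrained profit-maximizing threshold. It is a toy illustration, not the paper's calibrated model: the score distributions, the repayment-probability map, the thresholds, and the gain/loss values are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical credit-score distributions for two groups.
scores_A = rng.normal(650, 60, 10_000)  # advantaged group
scores_B = rng.normal(550, 60, 10_000)  # protected group

def repay_prob(score):
    # Toy monotone map from credit score to repayment probability.
    return np.clip((score - 450) / 300, 0.0, 1.0)

def mean_score_change(scores, threshold, gain=30, loss=-60):
    """One-step feedback: accepted applicants who repay gain score points,
    those who default lose points; rejected applicants are unchanged.
    Returns the expected change in the group's mean score."""
    accepted = scores[scores >= threshold]
    p = repay_prob(accepted)
    return np.sum(p * gain + (1 - p) * loss) / len(scores)

# Unconstrained policy: one profit-maximizing threshold for both groups
# (assumed here, for illustration, to sit at score 690).
t_max_utility = 690

# Demographic parity: lower group B's threshold until its acceptance
# rate matches group A's.
rate_A = np.mean(scores_A >= t_max_utility)
t_B_parity = np.quantile(scores_B, 1 - rate_A)

print("Δ mean score, group B (unconstrained):     ",
      round(mean_score_change(scores_B, t_max_utility), 2))
print("Δ mean score, group B (demographic parity):",
      round(mean_score_change(scores_B, t_B_parity), 2))
```

With these particular toy parameters, demographic parity lowers group B's threshold enough that many marginal loans default, so the group's mean score falls, while the unconstrained policy yields a small improvement, mirroring the regime in which the paper finds a fairness criterion causing active harm.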
Lydia explains how these results highlight the importance of measurement and temporal modeling in the evaluation of fairness criteria, suggesting a range of new challenges and trade-offs.
Lydia T. Liu is a PhD student in computer science at the University of California, Berkeley, where she is advised by Moritz Hardt and Michael I. Jordan. She is affiliated with RISELab and BAIR. Her research focuses on designing machine learning algorithms with reliable, robust performance guarantees and positive long-term societal impact.