Machine learning models are increasingly used to inform high-stakes decisions. Discrimination by machine learning becomes objectionable when it places certain privileged groups at systematic advantage and certain unprivileged groups at systematic disadvantage. Bias in training data, whether from prejudiced labels or from under-/over-sampling, yields models with unwanted bias. The comprehensive open source AI Fairness 360 (AIF360) Python toolkit contains 73 fairness metrics and 10 bias mitigation algorithms developed by the broader algorithmic fairness research community. All of them can be called in a standard way, closely modeled on scikit-learn's fit/predict paradigm. By capturing existing metrics and mitigation algorithms created by the research community in one extensible toolkit, AIF360 makes it easier for practitioners interested in AI fairness to work together to improve and apply technical approaches in the future.
Learn to use and contribute to AIF360 directly from its creators and become a member of the community. Compared to existing open source efforts on AI fairness, AIF360 takes a step forward in that it focuses on bias mitigation (as well as bias checking), industrial usability, and software engineering. By integrating these three aspects, AIF360 aims to bring together researchers with an interest in AI fairness and also helps translate our collective research results to practicing data scientists, data engineers, and developers deploying solutions in a variety of industries.
Rachel Bellamy is a principal research scientist and manages the Human-AI Collaboration Group at the IBM T.J. Watson Research Center in Yorktown Heights, New York, where she leads an interdisciplinary team of human-computer interaction experts, user experience designers, and user experience engineers. Previously, she worked in the Advanced Technology Group at Apple, where she conducted research on collaborative learning and led an interdisciplinary team that worked with the San Francisco Exploratorium and schools to pioneer the design, implementation, and use of media-rich collaborative learning experiences for K–12 students. She holds many patents and has published more than 70 research papers. Rachel holds a PhD in cognitive psychology from the University of Cambridge and a BS in psychology with mathematics and computer science from the University of London.
Kush R. Varshney was born in Syracuse, NY in 1982. He received the B.S. degree (magna cum laude) in electrical and computer engineering with honors from Cornell University, Ithaca, NY, in 2004. He received the S.M. degree in 2006 and the Ph.D. degree in 2010, both in electrical engineering and computer science from the Massachusetts Institute of Technology (MIT), Cambridge. While at MIT, he was a National Science Foundation Graduate Research Fellow.
Dr. Varshney is a research staff member and manager with IBM Research AI at the Thomas J. Watson Research Center, Yorktown Heights, NY, where he leads the Learning and Decision Making group. He is the founding co-director of the IBM Science for Social Good initiative. He applies data science and predictive analytics to human capital management, healthcare, olfaction, computational creativity, public affairs, international development, and algorithmic fairness, which has led to recognitions such as the 2013 Gerstner Award for Client Excellence for contributions to the WellPoint team and the Extraordinary IBM Research Technical Accomplishment for contributions to workforce innovation and enterprise transformation. He conducts academic research on the theory and methods of statistical signal processing and machine learning. His work has been recognized through best paper awards at the Fusion 2009, SOLI 2013, KDD 2014, and SDM 2015 conferences.
Karthikeyan Natesan Ramamurthy is a Research Staff Member in IBM Research AI at the Thomas J. Watson Research Center, Yorktown Heights, NY. He received his PhD in Electrical Engineering from Arizona State University. His broad research interests are in understanding the geometry and topology of high-dimensional data and developing theory and methods for efficiently modeling the data. He has also been intrigued by the interplay between humans, machines, and data, and the societal implications of machine learning. His papers have won best paper awards at the 2015 IEEE International Conference on Data Science and Advanced Analytics and the 2015 SIAM International Conference on Data Mining. He is an associate editor of the Digital Signal Processing journal and a member of IEEE.
Michael Hind is a Distinguished Research Staff Member in the IBM Research AI organization in Yorktown Heights, New York. His current research passion is in the general area of Trusted AI, focusing on the fairness, explainability, and reliability of the construction of AI systems.
Previously, he led departments of dozens of researchers focusing on programming languages, software engineering, cloud computing, and tools for cognitive systems. Michael’s team has successfully transferred technology to various parts of IBM and launched several successful open source projects. After receiving his Ph.D. from NYU in 1991, Michael spent 7 years as an assistant/associate professor of computer science at SUNY – New Paltz.
Michael is an ACM Distinguished Scientist, a member of IBM's Academy of Technology, and a former Associate Editor of ACM TACO; he has served on more than 30 program committees, given talks at top universities and conferences, and co-authored more than 40 publications. His 2000 paper on Adaptive Optimization was recognized as the OOPSLA '00 Most Influential Paper, and his work on Jikes RVM was recognized with the SIGPLAN Software Award in 2012.