There is mounting evidence that the widespread deployment of machine learning and artificial intelligence in business and government applications is reproducing or even amplifying existing prejudices and social inequalities. Even when an organization or an individual software engineer seeks to maintain fairness and accuracy, it’s easy to unintentionally create software that exhibits discriminatory or privacy-violating behavior.
Aileen Nielsen demonstrates how to identify and avoid bias and other unfairness in your analyses, and how to apply best practices when developing new software and machine learning products.
Outline:
Introduction and social relevance
Data discovery
Data processing
Modeling
Auditing your model
Research frontiers
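To give a flavor of the "Auditing your model" segment, here is a minimal sketch of a demographic-parity check on a binary classifier's predictions. It is not taken from the tutorial materials; the synthetic data, group labels, and 80% threshold are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical inputs: binary model predictions (1 = favorable outcome)
    # and a protected-group label for each of 1,000 individuals.
    y_pred = rng.integers(0, 2, size=1000)
    group = rng.choice(["A", "B"], size=1000)

    # Positive-prediction (selection) rate per group.
    rates = {g: y_pred[group == g].mean() for g in ("A", "B")}
    print("Selection rate by group:", rates)

    # Four-fifths rule heuristic: flag the model if the lower group's
    # selection rate is under 80% of the higher group's.
    ratio = min(rates.values()) / max(rates.values())
    print(f"Disparate impact ratio: {ratio:.2f}",
          "-> flag" if ratio < 0.8 else "-> ok")

A ratio below 0.8 is a common red flag under the four-fifths rule from US employment law, though passing this single check does not by itself establish that a model is fair.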
Aileen Nielsen works at an early-stage NYC startup working on time series data and neural networks, and she's the author of Practical Time Series Analysis (2019) and the forthcoming Practical Fairness (summer 2020). Previously, Aileen worked at corporate law firms, physics research labs, a variety of NYC tech startups, the mobile health platform One Drop, and on Hillary Clinton's presidential campaign. Aileen is the chair of the NYC Bar's Science and Law Committee and a fellow in law and tech at ETH Zurich. She is a frequent speaker at machine learning conferences on both technical and legal subjects.
Comments
Here is the git repo: https://github.com/StrataFairnessTutorial/DemoCode