The development of AI is creating new opportunities to improve the lives of people around the world. It’s also raising new questions about how best to build fairness, interpretability, privacy, security, and other ethical values into these systems.
Using Jupyter notebooks and high-level TensorFlow APIs, Andrew Zaldivar shares hands-on examples that highlight current work and recommended practices for building AI systems that are fair and inclusive for all. You’ll learn how to design your model around concrete goals for fairness and inclusion, why representative datasets matter for training and testing, how to check a system for unfair biases, and how to analyze its performance. Each of these points is accompanied by a technical demonstration that you can try for yourself.
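One common way to check a system for unfair biases, in the spirit of the practices described above, is to slice evaluation data by a sensitive attribute and compare a metric such as the false positive rate across groups. The sketch below is illustrative only, with made-up data and hypothetical group labels; it is not code from the session.

```python
# Illustrative sketch: compare per-group false positive rates on
# held-out predictions. Data and group names are invented for this example.
from collections import defaultdict


def false_positive_rate(labels, preds):
    """FPR = FP / (FP + TN), computed over the negative-labeled examples."""
    negatives = [(l, p) for l, p in zip(labels, preds) if l == 0]
    if not negatives:
        return 0.0
    fp = sum(1 for _, p in negatives if p == 1)
    return fp / len(negatives)


def fpr_by_group(labels, preds, groups):
    """Slice predictions by a sensitive attribute and compute FPR per slice."""
    sliced = defaultdict(lambda: ([], []))
    for l, p, g in zip(labels, preds, groups):
        sliced[g][0].append(l)
        sliced[g][1].append(p)
    return {g: false_positive_rate(ls, ps) for g, (ls, ps) in sliced.items()}


# Toy example: the classifier produces more false positives for group "b".
labels = [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1]
preds  = [0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1]
groups = ["a"] * 6 + ["b"] * 6

rates = fpr_by_group(labels, preds, groups)
print(rates)  # a large gap between groups flags a potential fairness issue
```

A large gap between slices is a signal to dig deeper, for example by rebalancing the training data or adjusting decision thresholds per the session's recommended practices.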
Andrew Zaldivar is a senior developer advocate for Google AI. His job is to help bring the benefits of AI to everyone. Andrew develops, evaluates, and promotes tools and techniques that help communities build responsible AI systems, writes posts for the Google Developers blog, and speaks at a variety of conferences. Previously, Andrew was a senior strategist in Google’s Trust and Safety group, where he protected the integrity of some of Google’s key products by using machine learning to scale, optimize, and automate abuse-fighting efforts. Andrew holds a PhD in cognitive neuroscience from the University of California, Irvine, and was an Insight Data Science fellow.
©2019, O'Reilly Media, Inc.