Put AI to Work
April 15-18, 2019
New York, NY

Responsible AI practices: A technical demonstration

Andrew Zaldivar (Google)
4:55pm–5:35pm Wednesday, April 17, 2019
Implementing AI
Location: Rendezvous
Secondary topics: Deep Learning and Machine Learning tools; Ethics, Privacy, and Security

Who is this presentation for?

  • Data scientists, data analysts, quantitative analysts, ML/AI practitioners, technical program and product managers, and technologists

Level

Beginner

Prerequisite knowledge

  • A basic understanding of machine learning
  • A working knowledge of Python (e.g., writing code that contains basic programming constructs)
  • Familiarity with high-level TensorFlow APIs (e.g., estimators, Keras)
  • Experience with data science libraries, such as Matplotlib, seaborn, pandas, NumPy, and scikit-learn (useful but not required)

What you'll learn

  • Explore readily available tools and techniques that help address fairness, inclusion, and other ethical values in AI systems
  • Learn how to implement these approaches into new or existing AI systems
  • Understand where to go should new problems related to fairness and inclusion arise in your system

Description

The development of AI is creating new opportunities to improve the lives of people around the world. It’s also raising new questions about the best way to build fairness, interpretability, privacy, security, and other moral and ethical values into these systems.

Using Jupyter notebooks and high-level TensorFlow APIs, Andrew Zaldivar shares hands-on examples that highlight current work and recommended practices for building AI systems that are fair and inclusive for all. You’ll learn how to design a model around concrete goals for fairness and inclusion, why representative datasets matter for training and testing, how to check a system for unfair biases, and how to analyze its performance. Each point is accompanied by a technical demonstration that you can readily try yourself.
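To give a flavor of the kind of bias check the demonstrations cover, here is a minimal sketch (not taken from the session materials) of one common approach: slicing an evaluation metric across subgroups of a sensitive attribute using pandas and scikit-learn. The column names ("group", "label", "score"), the threshold, the toy data, and the choice of false positive rate as the metric are all illustrative assumptions.

    # A minimal sketch of checking a trained binary classifier for unfair
    # biases by comparing the false positive rate (FPR) across subgroups.
    # All column names and data below are hypothetical.
    import pandas as pd
    from sklearn.metrics import confusion_matrix

    # Hypothetical evaluation results: true labels, model scores, and a
    # sensitive attribute for each example.
    eval_df = pd.DataFrame({
        "group": ["a", "a", "a", "b", "b", "b", "b", "a"],
        "label": [0, 1, 0, 0, 1, 0, 1, 1],
        "score": [0.2, 0.9, 0.6, 0.1, 0.4, 0.7, 0.8, 0.3],
    })

    THRESHOLD = 0.5  # decision threshold under inspection

    def false_positive_rate(df):
        """FPR = FP / (FP + TN) for one slice of the evaluation set."""
        preds = (df["score"] >= THRESHOLD).astype(int)
        tn, fp, fn, tp = confusion_matrix(
            df["label"], preds, labels=[0, 1]).ravel()
        return fp / (fp + tn) if (fp + tn) else float("nan")

    # A large FPR gap between groups flags a potential unfair bias
    # worth investigating further.
    for group, slice_df in eval_df.groupby("group"):
        print(f"group {group}: FPR = {false_positive_rate(slice_df):.2f}")

Comparable error rates across groups is only one of several fairness criteria; which metric to slice, and at what threshold, depends on the concrete fairness goals set during model design.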


Andrew Zaldivar

Google

Andrew Zaldivar is a developer advocate in the AI Group at Google, where he helps bring the benefits of AI to everyone by developing, evaluating, and promoting tools and techniques that help the larger community build responsible AI systems. Previously, he was a senior strategist in Google’s Trust and Safety Group, where he used machine learning to scale, optimize, and automate abuse-fighting efforts that protect the integrity of some of Google’s key products. He holds a PhD in cognitive neuroscience from the University of California, Irvine, and was an Insight Data Science fellow.
