
Responsible AI Practices: A Technical Demonstration

Andrew Zaldivar (Google)
4:55pm–5:35pm Wednesday, April 17, 2019
Implementing AI
Location: Mercury Rotunda
Secondary topics: Deep Learning and Machine Learning tools; Ethics, Privacy, and Security

Who is this presentation for?

Data Scientist, Data Analyst, Quantitative Analyst, ML / AI Practitioners, Technical Program & Product Managers, Technologists

Level

Beginner

Prerequisite knowledge

Attendees should have basic knowledge of machine learning. This can be gained through any of the myriad free online courses, such as Google's Machine Learning Crash Course (https://developers.google.com/machine-learning/crash-course/). In addition, attendees should be comfortable reading and writing Python code that uses basic programming constructs. Familiarity with high-level TensorFlow APIs (e.g., Estimators, Keras) is also helpful, as some of the models described in the examples were built in TensorFlow. While not required, prior familiarity with data science libraries (e.g., Matplotlib, Seaborn, pandas, NumPy, scikit-learn) is useful.

What you'll learn

Attendees will be up to date on readily available tools and techniques that help address fairness, inclusion, and other ethical values in AI systems. Attendees will also gain insight into how to implement these approaches in new or existing AI systems. Lastly, attendees will have a better sense of where to turn should new problems related to fairness and inclusion arise in their systems.

Description

The development of AI is creating new opportunities to improve the lives of people around the world. It is also raising new questions about the best way to build fairness, interpretability, privacy, security, and other moral and ethical values into these systems.

Using Jupyter Notebook and high-level TensorFlow APIs, this presentation shares hands-on examples that highlight current work and recommended practices for building AI systems that are fair and inclusive for all. Throughout the presentation, we will discuss how to design your model with concrete goals for fairness and inclusion, the importance of using representative datasets to train and test models, how to check a system for unfair biases, and how to analyze its performance. Each of these points will be accompanied by a technical demonstration that attendees can readily try for themselves.
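
The session's own notebooks are not reproduced here, but to give a flavor of one practice it covers (slicing evaluation metrics by subgroup to surface unfair biases), here is a minimal sketch in Python. The dataset, column names, and choice of metrics are illustrative assumptions, not material from the talk.

# A minimal sketch of sliced evaluation: compare metrics per subgroup.
# All data and column names below are hypothetical.
import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix

# Hypothetical evaluation results: true labels, model predictions,
# and a sensitive attribute to slice on.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "label": rng.integers(0, 2, 1000),
    "prediction": rng.integers(0, 2, 1000),
    "group": rng.choice(["A", "B"], 1000),
})

def rates(slice_df):
    """Accuracy and false positive rate for one slice of the data."""
    tn, fp, fn, tp = confusion_matrix(
        slice_df["label"], slice_df["prediction"], labels=[0, 1]
    ).ravel()
    return pd.Series({
        "n": len(slice_df),
        "accuracy": (tp + tn) / len(slice_df),
        "false_positive_rate": fp / (fp + tn),
    })

# Large gaps between groups on the same metric are a signal to
# investigate training data coverage and model behavior.
print(df.groupby("group").apply(rates))

The same idea scales up in purpose-built tooling (for example, TensorFlow's model analysis and visualization tools), where slices can be computed over many attributes at once rather than one hand-picked column.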

This is all in an effort to share knowledge, research, tools, datasets, and other resources with the larger community so that together we can evolve AI towards positive goals.

Andrew Zaldivar

Google

Andrew Zaldivar is a Developer Advocate in Google's AI group, helping to bring the benefits of AI to everyone. He develops, evaluates, and promotes tools and techniques that help the larger community build responsible AI systems.

Before joining Google AI, Andrew was a Senior Strategist in Google's Trust & Safety group, where he protected the integrity of some of Google's key products by using machine learning to scale, optimize, and automate abuse-fighting efforts.

Prior to joining Google, Andrew completed his Ph.D. in cognitive neuroscience at the University of California, Irvine, and was an Insight Data Science fellow.
