AI transparency: A brief overview of frameworks for transparent reporting of AI provenance, usage, and fairness-informed evaluation
AI is increasingly used to perform tasks that have serious impacts on people's lives. As such, there's a growing need to clarify intended use cases and minimize usage in contexts for which an AI system is not well suited. In an effort to encourage responsible, transparent, and accountable practices, Andrew Zaldivar details some of the existing frameworks technologists can use for ethical decision making in AI.
Andrew Zaldivar is a senior developer advocate for Google AI. His job is to help bring the benefits of AI to everyone. Andrew develops, evaluates, and promotes tools and techniques that help communities build responsible AI systems, writes posts for the Google Developers blog, and speaks at a variety of conferences. Previously, Andrew was a senior strategist in Google's Trust and Safety group, where he worked on protecting the integrity of some of Google's key products by using machine learning to scale, optimize, and automate abuse-fighting efforts. Andrew holds a PhD in cognitive neuroscience from the University of California, Irvine, and was an Insight Data Science fellow.