AI transparency: A brief overview of frameworks for transparent reporting of AI provenance, usage, and fairness-informed evaluation

AI is increasingly used to perform tasks that have serious impacts on people's lives. As such, there's a growing need to clarify intended use cases and to minimize usage in contexts for which an AI system is not well suited. In an effort to encourage responsible, transparent, and accountable practices, Andrew Zaldivar details some of the existing frameworks technologists can use for ethical decision making in AI.
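One concrete example of such a framework is Model Cards for Model Reporting (Mitchell et al., 2019), which Zaldivar co-authored: a short, structured document that accompanies a trained model and records its provenance, intended and out-of-scope uses, and evaluation results disaggregated across groups. As a rough illustration of the idea, here is a minimal Python sketch of a model-card-like report; the ModelCard class, its field names, and the example values are hypothetical illustrations, not part of any official tool or API.

from dataclasses import dataclass, field

@dataclass
class ModelCard:
    # Fields loosely follow the report sections proposed in
    # "Model Cards for Model Reporting" (Mitchell et al., 2019).
    model_name: str
    model_version: str
    intended_uses: list          # contexts the model is designed for
    out_of_scope_uses: list      # contexts the model is not well suited for
    evaluation_data: str         # description of the evaluation dataset
    metrics: dict                # aggregate performance, e.g. {"accuracy": 0.91}
    disaggregated_metrics: dict = field(default_factory=dict)  # per-group results

    def report(self) -> str:
        # Render the card as a plain-text summary.
        lines = [
            f"Model: {self.model_name} (v{self.model_version})",
            "Intended uses: " + "; ".join(self.intended_uses),
            "Out-of-scope uses: " + "; ".join(self.out_of_scope_uses),
            f"Evaluation data: {self.evaluation_data}",
        ]
        for name, value in self.metrics.items():
            lines.append(f"  {name}: {value:.3f}")
        # Fairness-informed evaluation: report metrics per subgroup,
        # not just in aggregate, so performance gaps are visible.
        for group, group_metrics in self.disaggregated_metrics.items():
            for name, value in group_metrics.items():
                lines.append(f"  [{group}] {name}: {value:.3f}")
        return "\n".join(lines)

card = ModelCard(
    model_name="toxicity-classifier",     # hypothetical example model
    model_version="1.0",
    intended_uses=["flagging comments for human review"],
    out_of_scope_uses=["fully automated moderation without human oversight"],
    evaluation_data="held-out comments labeled by three annotators",
    metrics={"accuracy": 0.91},
    disaggregated_metrics={
        "age < 30": {"accuracy": 0.93},
        "age >= 30": {"accuracy": 0.88},
    },
)
print(card.report())

The point of the disaggregated_metrics field is that fairness gaps, here a five-point accuracy difference between age groups in the made-up example data, become part of the standard report rather than an afterthought, and the out_of_scope_uses field makes explicit the contexts the model is not suited for.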

Andrew Zaldivar
Andrew Zaldivar is a senior developer advocate for Google AI. His job is to help bring the benefits of AI to everyone. Andrew develops, evaluates, and promotes tools and techniques that help communities build responsible AI systems; he also writes posts for the Google Developers blog and speaks at a variety of conferences. Previously, Andrew was a senior strategist in Google's Trust and Safety group, where he worked on protecting the integrity of some of Google's key products by using machine learning to scale, optimize, and automate abuse-fighting efforts. Andrew holds a PhD in cognitive neuroscience from the University of California, Irvine, and was an Insight Data Science fellow.