Put AI to Work
April 15-18, 2019
New York, NY

Executive Briefing: Fear and loathing in explainability and transparency—A savage journey to the heart of AI

Jana Eggers (Nara Logics)
1:50pm–2:30pm Thursday, April 18, 2019
Average rating: 4.67 (3 ratings)

Who is this presentation for?

  • Product managers, engineers, and executives

Level

Beginner

What you'll learn

  • Learn how to set achievable goals for explainability, and how to discuss transparency in a way that also respects your IP

Description

Explainability and transparency are often presented as goals in the spirit of “Don’t be evil.” But really, who wants to be confusing and obfuscating? The challenge is that these goals become much harder to define and deliver when you’re dealing with technology that has black-box tendencies, like AI, or with data so big our brains can’t see the traps hidden inside.

Jana Eggers explores explainability and transparency as goals for AI that are at once required and seemingly unachievable. This talk is designed to give you and your teams the tools to approach explainability and transparency with your specific use case, data, and algorithm in mind, so you can be confident you’re building trust with your users and satisfying regulatory concerns.


Jana Eggers

Nara Logics

Jana Eggers is CEO of Nara Logics, a neuroscience-inspired artificial intelligence company providing a platform for recommendations and decision support. A math and computer nerd who took the business path, Jana has had a career that’s taken her from a three-person business to fifty-thousand-plus-person enterprises. She opened the European logistics software offices as part of American Airlines, dove into the internet in ’96 at Lycos, founded Intuit’s corporate Innovation Lab, helped define mass customization at Spreadshirt, and researched conducting polymers at Los Alamos National Laboratory. Her passions are working with teams to define and deliver products customers love, algorithms and their intelligence, and inspiring teams to do more than they thought possible.

Comments on this page are now closed.

Comments

Peter Belizan | 12/30/2018 3:36am EST

I have a concern: Artificial Moral Agents.

Thanks.