Mar 15–18, 2020

When AI goes wrong and how to fix it with real-world AI auditing

Daniel Jeffries (Pachyderm)
1:45pm–2:25pm Wednesday, March 18, 2020
Location: 210 F

Who is this presentation for?

  • C-suite execs

Level

Intermediate

Description

AI and ML systems are going through a renaissance. After decades of setbacks and failures, they’re delivering big breakthroughs as algorithms, datasets, and processing power have caught up with ideas. Today many systems are better than humans at narrow tasks like recognizing images, detecting cancers, and playing Go.

But there’s something missing. Daniel Jeffries breaks down how you can make your AI and ML systems auditable and transparent right now with a few classic IT techniques your team already knows well.

AIs are prone to making strange or subtle mistakes that are easy to miss in well-trained models and even in poorly trained ones. Even our superhuman AIs make mistakes that are laughable to a human being: an image captioning system describing a baby as holding a baseball bat, a stop sign with a few marks on it read as a 45 mile-per-hour speed limit sign, a small distortion in an image turning a panda into a gibbon, a sticker of a toaster fooling an image system into thinking a banana is a toaster, and some of IBM Watson’s way-off-the-mark answers on Jeopardy.

But sometimes it’s even worse and systems go badly wrong: Google Photos classifying people of color as gorillas, a teacher fired on the strength of a poorly designed model that rated them 6% effective one year and 95% the next (a case from Weapons of Math Destruction), Microsoft’s Tay, and the Tesla crash.

The problem is that these systems lack any higher-level reasoning or broader context. Humans are a collection of intelligent processes all working together to create a semicohesive system that’s always updating and changing and weighing new information.

Not all examples are so big and scary. Sometimes they’re subtle. They hurt you in different ways. They hurt your bottom line, losing you money or exposing you to lawsuits.

Let’s say you have a fraud detection system that’s so sensitive it keeps triggering false positives. It’s annoying for customers because their cards keep getting frozen, and eventually they just leave the company. Or maybe you have a loan qualification system that keeps missing good candidates, which means you’re leaving lots of money on the table. To make sure AIs are doing what we want, we have to do a lot better.

The bad news is we don’t. The good news is we can.
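To make the fraud and loan examples above concrete, here’s a minimal sketch in Python. It isn’t from the talk itself, and the scores and labels are made up; the point is that the alert threshold, not just the model, determines how many good customers get their cards frozen and how much fraud slips through.

```python
# Minimal sketch: counting both kinds of mistakes a fraud model can make
# at different alert thresholds. All numbers here are made up for illustration.
from dataclasses import dataclass

@dataclass
class Outcome:
    threshold: float
    false_positives: int   # legitimate transactions flagged as fraud (frozen cards)
    false_negatives: int   # fraud that slips through (lost money)

def evaluate(scores, labels, threshold):
    """Tally false positives and false negatives at a given alert threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return Outcome(threshold, fp, fn)

# Hypothetical model scores and true labels (0 = legitimate, 1 = fraud)
scores = [0.05, 0.40, 0.55, 0.62, 0.70, 0.91, 0.97]
labels = [0,    0,    0,    1,    0,    1,    1]

for t in (0.5, 0.7, 0.9):
    print(evaluate(scores, labels, t))
# At 0.5, two good customers get their cards frozen; at 0.9, a fraudulent
# transaction slips through.
```

Auditing a system like this starts with measuring both costs explicitly instead of trusting a single accuracy number.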

Explainable AI would go a long way, but it’s a cutting-edge branch of AI right now, and it’s still developing. It’s not there yet, and it won’t solve all our problems. There is no panacea. These are complex problems, and they’ll always have unintended consequences we can’t foresee. These are complex systems running in a chaotic environment with infinite variables.

NASA engineers know that drawing something up on a whiteboard is a lot different than a system acting in the real world with dust and friction and solar flares. The same is true of simpler AI systems today when they interact with the real world. There will always be things we can’t predict and couldn’t plan for when we created those systems.

But we don’t need to wait for explainable AI. We can do a lot better with the tools we have in place today by bringing back the hard-earned lessons of decades of IT. Many of the same techniques we use to make sure a complex web application runs smoothly can be applied to AI and ML pipelines. In fact, we have no choice.
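As one illustration of that claim (a sketch of my own, not the speaker’s implementation): the structured request logging every web team already uses can be pointed at a model. Each prediction is recorded with the model version and a hash of its input, so a strange output months later can be traced and replayed. The model stub, version tag, and field names below are assumptions for the example.

```python
# Minimal sketch: audit-style logging for predictions, borrowed from ordinary
# web-application request logging. Field names and the model stub are illustrative.
import hashlib
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("prediction-audit")

MODEL_VERSION = "fraud-model-2020-03-18"   # hypothetical version tag

def predict_and_log(model, features: dict) -> float:
    payload = json.dumps(features, sort_keys=True)
    score = model(features)                 # stand-in for the real inference call
    log.info(json.dumps({
        "ts": time.time(),
        "model_version": MODEL_VERSION,
        "input_sha256": hashlib.sha256(payload.encode()).hexdigest(),
        "score": score,
    }))
    return score

# Usage with a dummy model: any prediction that looks wrong later can be matched
# back to the exact model version and input that produced it.
predict_and_log(lambda f: 0.42, {"amount": 129.99, "country": "US"})
```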

We’ve thrown the baby out with the bathwater. With each new iteration of IT, we seem to forget the lessons of the past and reinvent the wheel.

Mainframes, the personal computing revolution, the cloud, mobile, and AI and ML all seem very different on the surface, but they all have the same patterns underneath them, and many of the same best practices carry over from one generation to the next.

When you’re trying to figure out the future, look to the past. DevOps doesn’t apply one-to-one to AI and ML. There are lots of little nuances for data science teams that need to be taken into account. People are already doing this and you should too.
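One of those nuances, sketched below under my own assumptions rather than as the speaker’s recipe: in ML pipelines the data has to be versioned alongside the code. A training run can record a fingerprint of its input data next to the code revision and the resulting metrics, so an auditor can later ask which data and which code produced a given model. The file names and metrics are illustrative.

```python
# Minimal sketch: writing an audit record for each training run that ties
# together the data version, the code revision, and the resulting metrics.
import hashlib
import json
import subprocess
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Content hash of a data file; a new hash means the training data changed."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_run(data_file: Path, metrics: dict, out: Path = Path("runs.jsonl")) -> None:
    record = {
        "data_sha256": fingerprint(data_file),
        "code_rev": subprocess.run(
            ["git", "rev-parse", "HEAD"], capture_output=True, text=True
        ).stdout.strip(),
        "metrics": metrics,
    }
    with out.open("a") as f:
        f.write(json.dumps(record) + "\n")

# Usage (assumes a training set at data/train.csv and a git checkout):
# record_run(Path("data/train.csv"), {"auc": 0.91, "false_positive_rate": 0.03})
```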

Prerequisite knowledge

  • A basic understanding of AI and ML, and of the anomalies and mistakes that narrow AIs can make

What you'll learn

  • How to deal with AI and ML mistakes more effectively by looking to the past and drawing on best practices from IT infrastructure

Daniel Jeffries

Pachyderm

Dan Jeffries is the chief technology evangelist at Pachyderm. He’s also an author, engineer, futurist, and pro blogger, and he’s given talks all over the world on AI and cryptographic platforms. He’s spent more than two decades in IT as a consultant and at open source pioneer Red Hat. His articles have held the number one writer’s spot on Medium for artificial intelligence, bitcoin, cryptocurrency, and economics more than 25 times. His breakout AI tutorial series, “Learning AI if You Suck at Math,” along with his explosive pieces on cryptocurrency, “Why Everyone Missed the Most Important Invention of the Last 500 Years” and “Why Everyone Missed the Most Mind-Blowing Feature of Cryptocurrency,” are shared hundreds of times daily all over social media and have been read by more than 5 million people worldwide.
