Presented By
O’Reilly + Cloudera
Make Data Work
March 25-28, 2019
San Francisco, CA

Is your AI making good decisions?

Brian Rieger (Labelbox)
11:45am–12:00pm Tuesday, March 26, 2019
Location: 2024
Average rating: 4.67 (3 ratings)

A model is only as good as its underlying training data. (A model is, after all, modeling its training data.) That training data is very likely created by a large pool of people you've never met (i.e., outsourced data labeling). But when and how do their biases affect your model?
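As a minimal sketch of the point (not from the talk itself), one common way to surface labeler bias before it reaches a model is to measure inter-annotator agreement. The annotators, labels, and data below are entirely hypothetical:

    # Comparing two outsourced labelers' annotations on the same items
    # to surface systematic disagreement before it reaches the model.
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical labels from two annotators on the same 10 images.
    annotator_a = ["cat", "dog", "dog", "cat", "dog", "cat", "cat", "dog", "dog", "cat"]
    annotator_b = ["cat", "dog", "cat", "cat", "dog", "dog", "cat", "dog", "cat", "cat"]

    # Cohen's kappa corrects raw agreement for chance; values well below 1.0
    # suggest the "ground truth" itself encodes annotator disagreement or bias.
    kappa = cohen_kappa_score(annotator_a, annotator_b)
    print(f"Inter-annotator agreement (Cohen's kappa): {kappa:.2f}")

Low agreement means the model is being trained on decisions that humans themselves don't make consistently, which is exactly the kind of nuance the talk asks about.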

How do you know that the nuances of those decisions are OK to model? What if the model interprets the training data in a way that's not intuitive to you? How do you monitor AI that's making decisions in the real world? How much should you monitor? Should we regulate companies such that they must provide some level of monitoring and centralized authority/control via statistical confidence intervals of decision outcomes? What if a critically important model, like Google's self-driving model, kills people? Does the Supreme Court look at the training data? Does it look at decisions made in similar scenarios? Does it make Google start from scratch on a new model? How does Google prove that its model has been improved and won't make a bad decision again?
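To make the confidence-interval idea above concrete, here is a minimal sketch (my illustration, not the speaker's method) of a Wilson 95% interval on a model's rate of bad decision outcomes in production; the function name and the counts are hypothetical:

    # A Wilson 95% confidence interval on a model's rate of flagged/bad
    # decision outcomes, which a regulator or operator could track over time.
    import math

    def wilson_interval(bad: int, total: int, z: float = 1.96) -> tuple[float, float]:
        """Wilson score interval for the true rate of bad outcomes."""
        p = bad / total
        denom = 1 + z**2 / total
        center = (p + z**2 / (2 * total)) / denom
        margin = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
        return center - margin, center + margin

    # Hypothetical numbers: 12 flagged decisions out of 10,000 in production.
    low, high = wilson_interval(bad=12, total=10_000)
    print(f"Estimated bad-decision rate: [{low:.4%}, {high:.4%}]")

An operator (or regulator) could alert whenever the interval's lower bound crosses an agreed threshold, which is one plausible reading of "centralized control via statistical confidence intervals of decision outcomes."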

Brian Rieger attempts to answer some of these questions as he explores AI decision making.


Brian Rieger

Labelbox

Brian Rieger is cofounder and COO of Labelbox, the industry-leading training data software that is accelerating global access to artificial intelligence. An accomplished aerospace engineer, data scientist, and software developer turned serial entrepreneur, Brian began his career doing aerodynamics, testing, and flight certification of the Boeing 787 Dreamliner. He then built an aerospace company that put hardware on the International Space Station. Brian was recognized as one of Forbes's "30 under 30" for transforming enterprise technology with machine intelligence.