A model is only as good as the underlying training data. (A model is modeling training data.) You’re very likely creating your training data from a large pool of people you’ve never met (i.e., outsourced data labeling). But when and how do their biases affect your model?
How do you know that the nuance of the decisions is OK to model? What if the model interprets the training data in a way that’s not intuitive to you? How do you monitor AI that’s making decisions in the real world? How much should you monitor? Should we regulate companies such that they must provide some level of monitoring and centralized authority/control via statistical confidence intervals of decision outcomes? What if a superimportant model, like Google’s self-driving model, kills people? Does the Supreme Court look at the training data? Do they look at decisions made in similar scenarios? Do they make Google start from scratch on a new model? How does Google prove that its model has been improved and won’t make a bad decision again?
Brian Rieger attempts to answer some of these questions as he explores AI decision making.
Brian Rieger is cofounder and COO of Labelbox, the industry-leading training data software that is accelerating global access to artificial intelligence. An accomplished aerospace engineer, data scientist, and software developer turned serial entrepreneur, Brian began his career doing aerodynamics, testing, and flight certification of the Boeing 787 Dreamliner. He then built an aerospace company that put hardware on the International Space Station. Brian was recognized as one of Forbes’s “30 under 30” for transforming enterprise technology with machine intelligence.