Interpretability is crucial to the wider adoption of applied AI, yet today’s most popular approaches to building AI models don’t allow for it. Explainability in intelligent systems has run the gamut from traditional expert systems, which are fully explainable but inflexible and hard to use, to deep neural networks, which are effective but virtually impossible to see inside. Building trust between the consumers of AI applications and the algorithms that power them will require the ability to understand how intelligent systems reach their conclusions.
Mark Hammond explores the latest techniques and cutting-edge research underway to build explainability into AI models. Mark dives into two approaches, learning deep explanations and model induction, and discusses how effective each is at explaining classification tasks. Mark then explains how a third category, learning more interpretable models with recomposability, uses composable building blocks to bring explainability to control tasks. To keep things fun and engaging, Mark then demonstrates these approaches by solving the game Lunar Lander more effectively on the Bonsai platform.
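To give a concrete sense of what recomposability can look like, below is a minimal, self-contained Python sketch, not Bonsai's actual API: the concept names, thresholds, and engine choices are illustrative assumptions, and the observation/action layout is that of the standard Gym LunarLander environment. The idea is that a control policy is assembled from small, named sub-concepts, so every chosen action carries an explanation of which concept produced it.

```python
# Minimal sketch of "recomposability": a Lunar Lander policy assembled from
# small, named sub-concepts, so each action is traceable to the concept that chose it.
# Observation layout (standard Gym LunarLander):
#   [x, y, vx, vy, angle, angular_velocity, left_leg_contact, right_leg_contact]
# Discrete actions: 0 = do nothing, 1 = fire left engine, 2 = fire main engine, 3 = fire right engine.
# Thresholds and engine choices below are illustrative, not tuned.

from typing import Callable, List, Optional, Tuple

Concept = Callable[[List[float]], Optional[int]]  # returns an action, or None if the concept doesn't apply


def stabilize_orientation(obs: List[float]) -> Optional[int]:
    """Keep the lander upright by firing a side engine when it tilts too far (illustrative)."""
    angle = obs[4]
    if angle > 0.2:
        return 1
    if angle < -0.2:
        return 3
    return None


def control_descent(obs: List[float]) -> Optional[int]:
    """Slow the descent with the main engine when falling too fast (illustrative)."""
    vy = obs[3]
    if vy < -0.5:
        return 2
    return None


def compose(concepts: List[Tuple[str, Concept]]) -> Callable[[List[float]], Tuple[int, str]]:
    """Recompose sub-concepts into one policy: the first applicable concept acts,
    and its name is returned as the explanation for the chosen action."""
    def policy(obs: List[float]) -> Tuple[int, str]:
        for name, concept in concepts:
            action = concept(obs)
            if action is not None:
                return action, name
        return 0, "idle"
    return policy


policy = compose([
    ("stabilize_orientation", stabilize_orientation),
    ("control_descent", control_descent),
])

# A made-up example observation: tilted right and falling quickly.
obs = [0.0, 1.2, 0.0, -0.8, 0.3, 0.0, 0.0, 0.0]
action, reason = policy(obs)
print(f"action={action}, chosen by concept '{reason}'")  # action=1, chosen by 'stabilize_orientation'
```

Because the policy is built from named pieces, asking "why did the lander fire that engine?" has a direct answer: the concept that selected the action, which is the kind of explanation a monolithic learned policy cannot provide.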
Mark Hammond is cofounder and CEO at Bonsai. Mark has a deep passion for understanding how the mind works and has been thinking about AI throughout his career. He has held positions at Microsoft, at numerous startups, and in academia, including turns at Numenta and the Yale Neuroscience Department. He holds a degree in computation and neural systems from Caltech.