In recent years, deep learning has become the go-to solution for a broad range of applications, often outperforming other state-of-the-art technologies. Deep learning is amazing, but it sometimes fails miserably, even on very simple, practical problems. Shaked Shammah discusses four types of simple problems for which the gradient-based algorithms commonly used in deep learning either fail or suffer from significant difficulties. Shaked illustrates the failures through practical experiments, provides theoretical insights explaining their source, and explores how they might be remedied. Some can be solved by using specific approaches to network architecture and loss functions. For others, deep learning is simply not the right way to go.
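One way such a failure can arise (a minimal sketch, not drawn from the session materials; the `step` and `loss` names are purely illustrative) is when the loss surface is piecewise constant, so gradient descent receives no signal at all. A network built from hard-threshold units has exactly this property: the numerical gradient is zero almost everywhere.

```python
import numpy as np

def step(z):
    # Hard-threshold ("step") activation: non-differentiable, flat on each side
    return (z > 0).astype(float)

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = step(2.0 * x)  # labels generated by a true weight w* = 2.0

def loss(w):
    # Squared loss of a one-parameter "network" with a step activation
    return np.mean((step(w * x) - y) ** 2)

# Central-difference numerical gradient at a poor initial weight.
# Because step(w * x) is unchanged by a tiny perturbation of w,
# the loss is locally constant and the gradient is exactly zero.
w0, eps = -1.0, 1e-5
grad = (loss(w0 + eps) - loss(w0 - eps)) / (2 * eps)
print(grad)  # → 0.0
```

Despite a large loss at `w0 = -1.0`, gradient descent cannot make progress here: there is no descent direction to follow. This is the flavor of difficulty the talk examines, and it motivates remedies such as choosing architectures and loss functions that keep the gradient informative.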
Shaked Shammah is a graduate student at the Hebrew University, where he works under Shai Shalev-Shwartz, and a researcher at Mobileye Research. Shaked’s work focuses on general machine learning and optimization, specifically the theory and practice of deep learning and reinforcement learning.
©2017, O'Reilly Media, Inc.