
Failures of gradient-based deep learning

Shaked Shammah (Hebrew University)
Artificial Intelligence
Location: 1A 06/07
Level: Intermediate
Secondary topics: Deep learning
Average rating: 4.00 (1 rating)

In recent years, deep learning has become the go-to solution for a broad range of applications, often outperforming other state-of-the-art methods. Deep learning is amazing, but it sometimes fails miserably, even on very simple, practical problems. Shaked Shammah discusses four types of simple problems for which the gradient-based algorithms commonly used in deep learning either fail outright or suffer significant difficulties. Shaked illustrates the failures through practical experiments, offers theoretical insights into their source, and explores possible remedies: some failures can be addressed with specific choices of network architecture and loss function, while for others, deep learning is simply not the right tool for the job.
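
The abstract does not spell out the four problem types, but a classic illustration of this kind of failure is learning the parity of many input bits with gradient descent: the gradient carries almost no usable signal, so training stalls near chance accuracy. Below is a minimal, hypothetical PyTorch sketch of such an experiment; the dimensions, architecture, and hyperparameters are illustrative assumptions, not the speaker's actual setup.

    # Hypothetical sketch: a small MLP trained with full-batch SGD on the
    # parity of d random bits. The gradient signal for parity is known to
    # shrink rapidly as d grows, so training typically stalls near chance.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    d = 16                              # number of input bits (assumed)
    n = 4096                            # number of training examples (assumed)

    X = torch.randint(0, 2, (n, d)).float()
    y = X.sum(dim=1) % 2                # parity label in {0, 1}

    model = nn.Sequential(
        nn.Linear(d, 128), nn.ReLU(),   # one hidden layer of 128 units (assumed)
        nn.Linear(128, 1),
    )
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(2001):
        opt.zero_grad()
        logits = model(X).squeeze(1)
        loss = loss_fn(logits, y)
        loss.backward()
        opt.step()
        if step % 500 == 0:
            acc = ((logits > 0).float() == y).float().mean().item()
            print(f"step {step:4d}  loss {loss.item():.4f}  accuracy {acc:.2f}")

    # Expected behavior: accuracy hovers around 0.5. Compare with a linearly
    # separable target, which the same network and optimizer fit quickly.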

Shaked Shammah

Hebrew University

Shaked Shammah is a graduate student at the Hebrew University, where he works under Shai Shalev-Shwartz, and a researcher at Mobileye Research. Shaked’s work focuses on general machine learning and optimization, specifically the theory and practice of deep learning and reinforcement learning.
