Presented By O’Reilly and Intel Nervana
Put AI to work
September 17-18, 2017: Training
September 18-20, 2017: Tutorials & Conference
San Francisco, CA

Scalable deep learning

Ameet Talwalkar (Determined AI)
4:00pm–4:40pm Wednesday, September 20, 2017
Location: Imperial A
Average rating: 4.50 (2 ratings)

What you'll learn

  • Explore Hyperband, a novel algorithm for hyperparameter optimization
  • Learn how Paleo can quickly and accurately model the expected scalability and performance of putative parallel and distributed deep learning systems

Description

Although deep learning is highly acclaimed, fundamental bottlenecks exist when attempting to develop deep learning applications at scale. One involves exploring the design space of a model family, which typically requires training tens to thousands of models with different hyperparameters. Model training itself is a second major bottleneck, as classical learning algorithms are often infeasible for the petabyte-sized datasets that are fast becoming the norm.

Ameet Talwalkar offers an overview of Hyperband, a novel algorithm for hyperparameter optimization that is simple, flexible, theoretically sound, and an order of magnitude faster than leading competitors. He also shares research aimed at understanding the underlying landscape of training deep learning models in parallel and distributed environments: Paleo, an analytical performance model that can quickly and accurately model the expected scalability and performance of putative parallel and distributed deep learning systems.


Ameet Talwalkar

Determined AI

Ameet Talwalkar is cofounder and chief scientist at Determined AI and an assistant professor in the School of Computer Science at Carnegie Mellon University. Ameet led the initial development of the MLlib project in Apache Spark. He is the coauthor of the graduate-level textbook Foundations of Machine Learning (MIT Press) and teaches an award-winning MOOC on edX, Distributed Machine Learning with Apache Spark.