Put AI to Work
April 15-18, 2019
New York, NY

Random search and reproducibility for neural architecture search

Ameet Talwalkar (Carnegie Mellon University | Determined AI)
1:00pm–1:40pm Wednesday, April 17, 2019
Machine Learning, Models and Methods
Location: Grand Ballroom West
Secondary topics: Automation in machine learning and AI, Deep Learning and Machine Learning tools, Models and Methods
Average rating: 5.00 (1 rating)

Who is this presentation for?

  • Machine learning engineers and data scientists

Level

Intermediate

Prerequisite knowledge

  • A basic understanding of machine learning and deep learning

What you'll learn

  • Explore new NAS baselines that build on the following observations: NAS is a specialized hyperparameter optimization problem, and random search is a competitive baseline for hyperparameter optimization

Description

Neural architecture search (NAS) is a promising research direction that has the potential to replace expert-designed networks with learned, task-specific architectures. Ameet Talwalkar shares work that aims to help ground the empirical results in this field and proposes new NAS baselines that build on the following observations: NAS is a specialized hyperparameter optimization problem, and random search is a competitive baseline for hyperparameter optimization.
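
To make that framing concrete, the sketch below applies plain random search to a NAS-style search space, treating each architecture as an ordinary hyperparameter configuration. The search space, the choice names, and the train_and_evaluate stand-in are hypothetical illustrations, not the setup used in the talk:

    import random

    # Hypothetical NAS search space: each architecture is just a
    # hyperparameter configuration over discrete choices.
    SEARCH_SPACE = {
        "n_layers": [8, 12, 16],
        "op": ["sep_conv_3x3", "sep_conv_5x5", "max_pool_3x3", "skip_connect"],
        "hidden_size": [256, 512, 1024],
    }

    def sample_architecture(rng):
        # Uniform random sample: one choice per hyperparameter.
        return {name: rng.choice(opts) for name, opts in SEARCH_SPACE.items()}

    def train_and_evaluate(arch, rng):
        # Stand-in for training arch to completion and returning its
        # validation score; a toy value so the sketch runs end to end.
        return rng.random()

    def random_search(n_trials, seed=0):
        # Pure random search: sample, evaluate, keep the best configuration.
        rng = random.Random(seed)
        best_arch, best_score = None, float("-inf")
        for _ in range(n_trials):
            arch = sample_architecture(rng)
            score = train_and_evaluate(arch, rng)
            if score > best_score:
                best_arch, best_score = arch, score
        return best_arch, best_score

    print(random_search(n_trials=20))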

Leveraging these observations, Ameet evaluates both random search with early-stopping and a novel random search with weight-sharing algorithm on two standard NAS benchmarks: PTB and CIFAR-10. Results show that random search with early-stopping is a competitive NAS baseline that performs at least as well as ENAS, a leading NAS method, on both benchmarks. Additionally, random search with weight-sharing outperforms random search with early-stopping, achieving a state-of-the-art NAS result on PTB and a highly competitive result on CIFAR-10.
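
The weight-sharing idea can be illustrated in miniature: train a single over-parameterized supernetwork whose weights are reused by every sampled architecture, so candidates can be ranked without training each from scratch. Below is a minimal PyTorch sketch under assumed simplifications (one layer, three candidate operations, toy regression data); it is not the algorithm's actual implementation:

    import random
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SharedOpsLayer(nn.Module):
        # One supernetwork layer: candidate operations whose weights are
        # shared by every architecture sampled during the search.
        def __init__(self, dim):
            super().__init__()
            self.ops = nn.ModuleList([
                nn.Linear(dim, dim),                            # candidate op 0
                nn.Sequential(nn.Linear(dim, dim), nn.ReLU()),  # candidate op 1
                nn.Identity(),                                  # candidate op 2 (skip)
            ])

        def forward(self, x, op_index):
            return self.ops[op_index](x)

    dim = 32
    layer, head = SharedOpsLayer(dim), nn.Linear(dim, 1)
    opt = torch.optim.SGD(list(layer.parameters()) + list(head.parameters()),
                          lr=0.01)

    # Search phase: each step trains the shared weights through one
    # randomly sampled architecture (here, a single op choice).
    for _ in range(200):
        x, y = torch.randn(8, dim), torch.randn(8, 1)
        op_index = random.randrange(3)
        loss = F.mse_loss(head(layer(x, op_index)), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Evaluation phase: rank architectures using the shared weights
    # directly; no candidate is ever trained from scratch.
    with torch.no_grad():
        x, y = torch.randn(64, dim), torch.randn(64, 1)
        best = min(range(3),
                   key=lambda i: F.mse_loss(head(layer(x, i)), y).item())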

Ameet concludes by exploring existing reproducibility issues for published NAS results, noting the lack of source material needed to exactly reproduce these results, and discussing the robustness of published results given the various sources of variability in NAS experimental setups.

All information (code, random seeds, documentation) needed to exactly reproduce these results will be shared, along with results from two independent experimental runs of random search with weight-sharing on each benchmark.
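
As background on why random seeds are part of that release: even with identical code and data, an experiment is exactly reproducible only if every source of randomness is pinned. A minimal sketch of the usual PyTorch-style bookkeeping (an assumed illustration, not the released code):

    import random
    import numpy as np
    import torch

    def set_all_seeds(seed):
        # Pin the common sources of randomness in a PyTorch experiment.
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # cuDNN can still introduce nondeterminism unless constrained:
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False

Seeds alone do not guarantee bit-exact results across library versions or hardware, which is one reason the talk distinguishes exact reproducibility from robustness to the various sources of variability in NAS experimental setups.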

Ameet Talwalkar

Carnegie Mellon University | Determined AI

Ameet Talwalkar is cofounder and chief scientist at Determined AI and an assistant professor in the School of Computer Science at Carnegie Mellon University. His research addresses scalability and ease-of-use issues in the field of statistical machine learning, with applications in computational genomics. Ameet led the initial development of the MLlib project in Apache Spark. He is the coauthor of the graduate-level textbook Foundations of Machine Learning (MIT Press) and teaches an award-winning MOOC on edX, Distributed Machine Learning with Apache Spark.