Put AI to Work
April 15-18, 2019
New York, NY

Distributed TensorFlow with distribution strategies

Magnus Hyttsten (Google)
2:40pm-3:20pm Thursday, April 18, 2019
Implementing AI
Location: Rendezvous
Secondary topics: Deep Learning and Machine Learning tools
Average rating: ★★★★★ (5.00, 2 ratings)

Who is this presentation for?

  • Developers and software engineers

Level

Intermediate

Prerequisite knowledge

  • Experience using TensorFlow and running distributed training in any framework

What you'll learn

  • Learn how to run distributed TensorFlow on CPUs, GPUs, and TPUs with the Keras and Estimator APIs

Description

The TensorFlow team has changed the fundamental way distributed training is done in TensorFlow: TensorFlow 2.0 enables eager execution by default, refactors core functionality out of Estimators, and packages the algorithms for distributing computation into pluggable objects called DistributionStrategies.
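For orientation, here is a minimal sketch of that pattern, assuming TensorFlow 2.0's tf.distribute API and the synthetic data shown (this is an illustration, not material from the session itself): a DistributionStrategy is created once, and any Keras model built and compiled inside its scope is trained across all local GPUs by a plain model.fit() call.

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model onto every visible local GPU
# and keeps the replicas in sync by all-reducing gradients each step.
strategy = tf.distribute.MirroredStrategy()

# Building and compiling inside strategy.scope() is the only change
# needed; model.fit() then runs the training loop across replicas.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Synthetic data, just to make the sketch runnable end to end.
x = np.random.random((256, 10)).astype("float32")
y = np.random.random((256, 1)).astype("float32")
model.fit(x, y, batch_size=32, epochs=1)
```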

Magnus Hyttsten shares best practices for using TensorFlow effectively in a distributed setting. He covers how TensorFlow's new DistributionStrategies make high-performance training easy with Keras models (and custom models) on multi-GPU setups, as well as multinode training on clusters of machines with accelerators; explores some of the underlying algorithms, such as all-reduce (see the multiworker sketch below); and shows how distribution can accelerate your training across various hardware configurations. You'll also learn how to measure performance and how to report and reproduce performance results consistently.
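As a hedged illustration of the multinode case (the worker addresses below are hypothetical placeholders, and the strategy's module path is as it appeared in TensorFlow 2.0):

```python
import json
import os

import tensorflow as tf

# TF_CONFIG tells each process which role it plays in the cluster; it
# must be set before the strategy is constructed. Both hostnames here
# are hypothetical placeholders.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["host1:12345", "host2:12345"]},
    "task": {"type": "worker", "index": 0},  # this process is worker 0
})

# MultiWorkerMirroredStrategy synchronizes the workers with collective
# all-reduce (e.g., ring all-reduce), averaging gradients across all
# replicas on every step. In TF 2.0 it lived under
# tf.distribute.experimental.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])
    model.compile(optimizer="sgd", loss="mse")

# Running the same script (each copy with its own TF_CONFIG task index)
# on every worker and calling model.fit() yields synchronized training.
```

The same strategy.scope() pattern from the single-machine sketch carries over unchanged; only the strategy object differs.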


Magnus Hyttsten

Google

Magnus Hyttsten is a developer advocate for TensorFlow at Google, where he works on the TensorFlow product. A developer fanatic, Magnus is a popular speaker on machine learning and mobile development at major industry events, including Google I/O, the AI Summit, the AI Conference, ODSC, GTC, and QCon. He's currently focused on reinforcement learning models and on making model inference efficient on mobile devices.