We have changed the fundamentals of how distributed training is done in TensorFlow. Changes include:
- Using TensorFlow 2.0, which enables eager execution by default
- Refactoring some of the core functionality out of Estimators
- Packaging the algorithms that distribute computation into pluggable objects called “DistributionStrategies”
We’ll go over basic DistributionStrategy usage, explore some of the underlying algorithms (such as allreduce), and show how distribution can accelerate your training across a variety of hardware configurations.
We’ll also discuss how we measure performance and how to present and reproduce performance results consistently.
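To make the allreduce idea mentioned above concrete, here is a minimal single-process sketch of ring allreduce, one algorithm a DistributionStrategy can use to sum gradients across workers. This is an illustrative simulation, not TensorFlow's actual implementation: each "worker" is just a list, and the function name and chunking scheme are assumptions for the sketch.

```python
def ring_allreduce(worker_data):
    """Simulate ring allreduce: every worker ends up holding the
    elementwise sum of all workers' vectors.

    worker_data: list of equal-length lists, one vector per worker.
    Returns the final vector held by each worker.
    """
    n = len(worker_data)                 # number of simulated workers
    size = len(worker_data[0]) // n      # sketch assumes length divides evenly
    assert len(worker_data[0]) == size * n, "vector length must divide by worker count"

    # chunks[i][c] is worker i's current copy of chunk c of the vector.
    chunks = [[list(w[c * size:(c + 1) * size]) for c in range(n)]
              for w in worker_data]

    # Phase 1, scatter-reduce: in n-1 ring steps, each worker forwards one
    # chunk to its neighbor, which adds it in. Afterward, each worker owns
    # the fully summed copy of exactly one chunk.
    for step in range(n - 1):
        for i in range(n):
            c = (i - step) % n           # chunk worker i forwards this step
            dst = (i + 1) % n
            for k in range(size):
                chunks[dst][c][k] += chunks[i][c][k]

    # Phase 2, allgather: circulate the reduced chunks around the ring so
    # every worker ends up with all of them.
    for step in range(n - 1):
        for i in range(n):
            c = (i + 1 - step) % n       # reduced chunk worker i forwards
            dst = (i + 1) % n
            chunks[dst][c] = list(chunks[i][c])

    return [[x for c in range(n) for x in chunks[i][c]] for i in range(n)]
```

Each worker sends and receives only 2(n-1) chunks regardless of the number of workers, which is why ring allreduce scales well compared to funneling all gradients through a single parameter server.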
Magnus Hyttsten is a developer advocate for TensorFlow at Google, where he works on the TensorFlow product. A developer fanatic, Magnus is a popular speaker on machine learning and mobile development at major industry events, including Google I/O, the AI Summit, the AI Conference, ODSC, GTC, and QCon. He’s currently focused on reinforcement learning models and on making model inference efficient on mobile devices.
©2019, O'Reilly Media, Inc.