Magnus Hyttsten and Priya Gupta demonstrate how to perform distributed TensorFlow training using the Keras high-level APIs. They walk you through TensorFlow's distributed architecture, how to set up a distributed cluster using Kubeflow and Kubernetes, and how to distribute models created in Keras. Along the way, you'll discover why TPUs and GPUs are so effective at processing machine learning workloads and learn how to configure TensorFlow to use them.
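The core pattern the session covers can be sketched with TensorFlow's `tf.distribute` API: build and compile a Keras model inside a strategy's scope, and `fit` handles replicating the computation across the available devices. This is a minimal illustration, not the speakers' exact code; the layer sizes and synthetic data are placeholders.

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model across all local GPUs
# (it falls back to a single CPU device if no GPUs are present).
strategy = tf.distribute.MirroredStrategy()

# Variables created inside the scope are mirrored across replicas.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Synthetic data stands in for a real input pipeline.
x = np.random.rand(64, 8).astype("float32")
y = np.random.rand(64, 1).astype("float32")

# fit() automatically shards batches across replicas under the strategy.
history = model.fit(x, y, epochs=1, batch_size=16, verbose=0)
```

For multi-machine training of the kind run on a Kubernetes/Kubeflow cluster, the same pattern applies with a multi-worker strategy (e.g. `tf.distribute.MultiWorkerMirroredStrategy`) plus a `TF_CONFIG` environment variable describing the cluster.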
Magnus Hyttsten is a developer advocate for TensorFlow at Google, where he works on developing the TensorFlow product. A developer fanatic, Magnus is a popular speaker on machine learning and mobile development at major industry events such as Google I/O, the AI Summit, the AI Conference, ODSC, GTC, and QCon. He currently focuses on reinforcement learning models and on making model inference efficient on mobile devices.
Priya Gupta is a software engineer on the TensorFlow team at Google, where she works on making it easier to run TensorFlow in a distributed environment. She’s passionate about technology and education and wants machine learning to be accessible to everyone. Previously, she worked at Coursera and on the mobile ads team at Google.
©2018, O'Reilly Media, Inc.