Performant, scalable models in TensorFlow 2.0 with tf.data, tf.function, and tf.distribute
TensorFlow’s tf.distribute library helps you scale your model from a single GPU to multiple GPUs, and on to multiple machines, using simple APIs that require very few changes to your existing code.

Join Taylor Robie and Priya Gupta to learn how to use tf.distribute to scale your machine learning models on a variety of hardware platforms, from commercial cloud offerings to dedicated hardware. You’ll leave with tools and tips for getting the best scaling out of your training in TensorFlow.

What you'll learn
- How to distribute TensorFlow 2.0 training across a variety of hardware using best practices (see the sketch below)

Who is this presentation for?
- Anyone who needs a lot of compute for their ML projects

Prerequisite knowledge
- Familiarity with TensorFlow
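As a taste of the "very few changes" claim, here is a minimal sketch (not from the session materials; it assumes TensorFlow 2.x with the Keras API, and the toy dataset and layer sizes are purely illustrative) of how tf.distribute.MirroredStrategy replicates existing training code across all local GPUs:

```python
import tensorflow as tf

# Illustrative in-memory dataset; a real tf.data pipeline would read from disk.
features = tf.random.normal([1024, 10])
labels = tf.random.uniform([1024], maxval=2, dtype=tf.int32)
dataset = (tf.data.Dataset.from_tensor_slices((features, labels))
           .shuffle(1024)
           .batch(64))

# MirroredStrategy mirrors the model's variables across all local GPUs
# (falling back to CPU if none are available).
strategy = tf.distribute.MirroredStrategy()

# The only change to existing Keras code: build and compile the model
# inside the strategy's scope.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(2),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

# model.fit distributes the tf.data pipeline across the replicas automatically.
model.fit(dataset, epochs=2)
```

The same training loop can scale out to multiple machines by swapping in a different strategy such as tf.distribute.MultiWorkerMirroredStrategy, which is part of what makes the API attractive for moving between hardware platforms.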
Taylor Robie is a software engineer at Google, where he’s a member of the TensorFlow high-level APIs team focusing on performance, with a particular emphasis on out-of-the-box performance of Keras. Previously, he was a maintainer of the TensorFlow official models repository and optimized several of the Google MLPerf submissions.
Priya Gupta is a software engineer on the TensorFlow team at Google, where she works on making it easier to run TensorFlow in a distributed environment. She’s passionate about technology and education and wants machine learning to be accessible to everyone. Previously, she worked at Coursera and on the mobile ads team at Google.