TensorFlow on AWS
What you'll learn, and how you can apply it
- Discover how to easily build, train, and deploy TensorFlow models on AWS
Who is this presentation for?
- You're a data scientist, ML developer, or researcher.
- You have a basic understanding of TensorFlow.
Hardware and/or installation requirements:
- A laptop
- An AWS account
- Build, train, and deploy TensorFlow models with Amazon SageMaker
Amazon SageMaker is a fully managed machine learning platform that enables developers and data scientists to build, train, and deploy custom TensorFlow models. You'll learn how to build custom TensorFlow models in Jupyter and iterate quickly in Amazon SageMaker's local mode, how to pass TensorFlow scripts to Amazon SageMaker to train your model on a cluster of instances, and how to deploy your trained model to Amazon SageMaker hosting while seamlessly specifying preprocessing and postprocessing steps.
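As a rough sketch, the build-train-deploy flow above maps onto the SageMaker Python SDK roughly as follows. The entry-point script name, S3 path, IAM role, and instance types here are illustrative placeholders, not values prescribed by this course, and this configuration requires an AWS account to actually run:

```python
# Sketch of the SageMaker build-train-deploy flow (placeholder values).
from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(
    entry_point="train.py",   # your TensorFlow training script (placeholder name)
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
    instance_count=1,
    instance_type="local",    # local mode for fast iteration; switch to an
                              # instance type such as "ml.p3.2xlarge" to train remotely
    framework_version="2.1",
    py_version="py3",
)

# Train against data in S3 (placeholder bucket).
estimator.fit("s3://my-bucket/training-data")

# Deploy the trained model to a SageMaker hosting endpoint.
predictor = estimator.deploy(initial_instance_count=1,
                             instance_type="ml.m5.xlarge")
```

Preprocessing and postprocessing steps can be attached by supplying an inference script that defines `input_handler` and `output_handler` functions for the TensorFlow Serving container.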
- Reduce inference cost by up to 75% for TensorFlow models with Amazon Elastic Inference
You'll learn how to deploy your TensorFlow 2.0 models with Amazon Elastic Inference, which lets you attach just the right amount of GPU-powered acceleration to any Amazon SageMaker or Amazon EC2 instance. You'll set up Elastic Inference accelerators for your TensorFlow model and measure the resulting cost savings and performance.
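In the SageMaker Python SDK, attaching an accelerator is a single deployment parameter. This sketch assumes a trained `sagemaker.tensorflow.TensorFlow` estimator named `estimator`; the instance and accelerator sizes are illustrative, not recommendations:

```python
# Sketch: attach an Elastic Inference accelerator at deployment time.
# Sizes below are placeholders; choose them to match your latency
# and throughput needs rather than paying for a full GPU instance.
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",       # CPU instance hosting the endpoint
    accelerator_type="ml.eia2.medium",  # GPU-powered acceleration, sized separately
)
```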
- Distributed training with TensorFlow models on Amazon SageMaker and Amazon Elastic Container Service for Kubernetes (EKS)
Reducing the time it takes to train your TensorFlow models is crucial to improving your productivity and reducing your time to market. You'll learn how to efficiently scale your training workloads to multiple instances, with Amazon SageMaker doing the heavy lifting for you, and how to set up and optimize distributed training on a Kubernetes cluster in Amazon EKS with AWS Deep Learning Containers.
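On the SageMaker side, scaling out is largely a matter of estimator configuration. A minimal sketch, with placeholder script name, role, and data path (the exact name of the distribution parameter varies slightly across SDK versions):

```python
# Sketch: scale the same training script to multiple instances.
# SageMaker provisions the cluster and wires the workers together.
from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(
    entry_point="train.py",  # placeholder training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
    instance_count=2,        # number of training instances
    instance_type="ml.p3.2xlarge",
    framework_version="2.1",
    py_version="py3",
    # Illustrative strategy; Horovod-based training is configured via "mpi".
    distribution={"parameter_server": {"enabled": True}},
)
estimator.fit("s3://my-bucket/training-data")  # placeholder S3 input
```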
- Train once and deploy everywhere on cloud and edge devices with SageMaker Neo
You'll learn how to use the Amazon SageMaker Neo Deep Learning Compiler (DLC) to compile your trained TensorFlow models and deploy them in the cloud or on edge devices using AWS IoT Greengrass. You'll see how Neo DLC optimizes trained models by improving efficiency and reducing the memory footprint of the compiled model, and how the Neo runtime abstracts the underlying hardware, allowing a compiled model to run on multiple hardware targets such as Intel Xeon and Atom, NVIDIA Jetson, ARM, and many more. You'll gain experience improving runtime performance by 2x and reducing memory footprint by 10x using SageMaker Neo.
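The compile step can be driven from the same estimator. A sketch, again assuming a trained `sagemaker.tensorflow.TensorFlow` estimator named `estimator`; the input shape, target family, and S3 path are placeholders that depend on your model:

```python
# Sketch: compile a trained model with SageMaker Neo for a target platform.
compiled_model = estimator.compile_model(
    target_instance_family="ml_c5",             # e.g. an Intel Xeon target
    input_shape={"input_1": [1, 224, 224, 3]},  # model-dependent placeholder
    output_path="s3://my-bucket/compiled/",     # placeholder S3 location
    framework="tensorflow",
    framework_version="2.1",
)

# Deploy the Neo-compiled model like any other SageMaker model.
predictor = compiled_model.deploy(initial_instance_count=1,
                                  instance_type="ml.c5.xlarge")
```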
About your instructors
Shashank Prasanna is a senior AI and machine learning evangelist at Amazon Web Services, where he focuses on helping engineers, developers, and data scientists solve challenging problems with machine learning. Previously, he worked at NVIDIA, MathWorks (makers of MATLAB), and Oracle in product marketing and software development roles focused on machine learning products. Shashank holds an MS in electrical engineering from Arizona State University.
Vikrant Kahlir is a solution architect at Amazon Web Services.
Rama Thamman is an R&D manager at Amazon Web Services.
Get the Platinum pass or the Training pass to add this course to your package.