Faster inference in TensorFlow 2.0 with TensorRT
TensorRT is tightly integrated with TensorFlow 2.0 and offers high performance for deep learning inference through a simple API. Siddharth Sharma and Joohoon Lee use examples to show you how to optimize an app using TensorRT with the new Keras APIs in TensorFlow 2.0 (a minimal conversion sketch follows the sections below). They share tips and tricks for getting the highest performance possible on GPUs and walk through examples of how to debug and profile apps using tools from NVIDIA and TensorFlow. You’ll walk away with an overview and resources to get started, and if you’re already familiar with TensorFlow, you’ll get tips on how to get the most out of your application.
Who is this presentation for?
- Developers and data scientists who are building deep learning applications in TensorFlow
Prerequisite knowledge
- Experience using deep learning in TensorFlow
What you'll learn
- Discover the latest features of the TensorFlow-TensorRT integration, workflows and tools for profiling, and tips and tricks to squeeze the most performance out of your inference pipeline
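The workflow the abstract describes comes down to converting a trained Keras/TensorFlow 2.0 model with the TF-TRT converter that ships inside TensorFlow. The snippet below is a minimal sketch of that conversion, not code from the session; the SavedModel directory names are placeholders, and it assumes the model has already been exported with model.save() or tf.saved_model.save().

```python
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Conversion parameters; FP16 selects half-precision TensorRT kernels where available.
params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16)

# Convert an existing SavedModel into one whose supported subgraphs run on TensorRT.
# "resnet50_saved_model" is a placeholder path, not from the session.
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="resnet50_saved_model",
    conversion_params=params)
converter.convert()
converter.save("resnet50_trt_saved_model")

# The optimized model loads and runs like any other SavedModel.
loaded = tf.saved_model.load("resnet50_trt_saved_model")
infer = loaded.signatures["serving_default"]
```

FP32 follows the same pattern; INT8 additionally requires passing a calibration input function to convert() so TensorRT can compute quantization ranges.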
Siddharth Sharma is a senior technical marketing manager for accelerated computing at NVIDIA. Previously, he was a product marketing manager for Simulink and Stateflow at MathWorks, where he worked closely with automotive and aerospace companies to adopt model-based design for creating control software.
Joohoon Lee is a principal product manager for AI inference software at NVIDIA. Previously, he led the automotive deep learning software solutions team, focusing on the production deployment of neural networks on the DRIVE AGX platform using TensorRT. His expertise includes quantization, sparsity optimization, compilers, and GPU and AI accelerator architecture design. Joohoon received his BS and MS in electrical and computer engineering from Carnegie Mellon University.