Running TensorFlow at scale on GPUs (sponsored by NVIDIA)
Neil Truong, Kari Briski, and Khoa Ho walk you through their experience running TensorFlow at scale on GPU clusters such as the DGX SuperPOD and the Summit supercomputer. They explore the design of these large-scale GPU systems and detail how to run TensorFlow at scale, using BERT and combined AI and HPC applications as examples.
This session is sponsored by NVIDIA.
What you'll learn
- Learn how NVIDIA ran TensorFlow at scale
Neil Truong is a senior field application engineer at NVIDIA with expertise in system management and hardware architecture, focused on GPU deep learning and machine learning applications. He supports the Google platform team in deploying next-generation GPU hardware and software. He has experience in system-on-a-chip (SoC) and system-level testing processes and has managed GPU system design from concept to production.
Kari Briski has been in the hardware and software solution industry for almost 20 years, spending the last three years at NVIDIA in the data center and deep learning software group, creating computing products that help people achieve their life's work.
Khoa Ho is a solutions architect at NVIDIA, working on natural language processing (NLP) applications and general deep learning at scale. He regularly runs and troubleshoots multinode DL workloads on both cloud and on-premises GPU clusters.