October 28–31, 2019
Schedule: Accelerators sessions
Using TensorFlow with GPUs, TPUs, and other accelerators.
11:00am–11:40am Wednesday, October 30, 2019
Location: Grand Ballroom H

Sam Witteveen shares tips and tricks for taking advantage of tensor processing units (TPUs) in TensorFlow 2.0 and for converting an existing deep learning project into one that runs smoothly and quickly on Cloud TPUs.
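The conversion the abstract describes usually amounts to wrapping model construction in a TPU distribution strategy. The sketch below is an assumption about that recipe, not the speaker's code; the model architecture is a hypothetical placeholder, and in TF 2.0 the strategy class was still `tf.distribute.experimental.TPUStrategy`.

```python
import tensorflow as tf

# Sketch (assumed recipe, not the session's code): connect to a Cloud TPU
# if one is reachable; otherwise fall back to the default CPU/GPU strategy
# so the same script runs anywhere.
try:
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)
except Exception:  # no TPU available in this environment
    strategy = tf.distribute.get_strategy()

# Building and compiling the model under the strategy scope is typically
# the only change an existing Keras project needs.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(32,)),  # hypothetical input size
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

With this pattern, `model.fit` then trains on the TPU when one is attached and on CPU/GPU otherwise, with no further code changes.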
11:50am–12:30pm Wednesday, October 30, 2019
Location: Grand Ballroom H
TensorFlow 2.0 offers high performance for deep learning inference through a simple API. Siddharth Sharma and Joohoon Lee explain how to optimize an app using TensorRT with the new Keras APIs in TensorFlow 2.0. You'll learn tips and tricks to get the highest performance possible on GPUs and see examples of debugging and profiling tools by NVIDIA and TensorFlow.
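The workflow the abstract alludes to is typically: export the trained model as a SavedModel, then run it through the TF-TRT converter. This is a hedged sketch under assumptions, not the speakers' code; the tiny `TinyModel` module is a stand-in for a real network, and the conversion step requires an NVIDIA GPU with the TensorRT libraries installed, so it is guarded here.

```python
import tempfile
import tensorflow as tf

# Sketch (assumptions, not the session's code): save a model as a TF
# SavedModel, then convert it with TF-TRT for faster GPU inference.
class TinyModel(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
    def __call__(self, x):
        # Stand-in for a real trained network.
        return tf.nn.relu(x) * 2.0

export_dir = tempfile.mkdtemp()
tf.saved_model.save(TinyModel(), export_dir)

try:
    from tensorflow.python.compiler.tensorrt import trt_convert as trt
    converter = trt.TrtGraphConverterV2(input_saved_model_dir=export_dir)
    converter.convert()
    converter.save(export_dir + "_trt")
except Exception:  # TensorRT not available in this environment
    pass
```

The converted SavedModel keeps the same serving signature, so inference code loads it exactly as it would load the unoptimized model.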
1:40pm–2:20pm Wednesday, October 30, 2019
Location: Grand Ballroom H

Average rating: 2.00 (1 rating)
Sudipta Sengupta dives into his experience with Amazon Elastic Inference and AWS Inferentia with TensorFlow in the AWS cloud.
2:30pm–3:10pm Wednesday, October 30, 2019
Location: Grand Ballroom H

Average rating: 4.00 (1 rating)
Neural networks are now shipping in consumer-facing projects. Enterprises need to train and ship them fast, and data scientists want to waste less time on endless training. Martin Gorner explains how Google's tensor processing units (TPUs) are here to help.
4:10pm–4:50pm Wednesday, October 30, 2019
Location: Grand Ballroom H
Average rating: 2.00 (1 rating)
Victoria Rege and David Norman dive into software optimization for new accelerators using TensorFlow and accelerated linear algebra (XLA).
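The usual entry point to XLA from TensorFlow is compiling a `tf.function`. The snippet below is an illustrative sketch, not the speakers' material; note that TF 2.0/2.1 spelled the flag `experimental_compile=True`, later renamed `jit_compile=True`.

```python
import tensorflow as tf

# Sketch: asking TensorFlow to compile a function with XLA. XLA can fuse
# the matmul, bias add, and relu into a single compiled kernel.
@tf.function(jit_compile=True)
def dense_relu(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.ones((2, 3))
w = tf.ones((3, 4))
b = tf.zeros((4,))
y = dense_relu(x, w, b)
```

The same decorator works on CPU, GPU, and TPU backends; fusion mainly pays off by eliminating intermediate tensors and kernel launches.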
5:00pm–5:40pm Wednesday, October 30, 2019
Location: Grand Ballroom H
Manjunath Kudlur and Andy Hock describe the software that compiles TensorFlow to the recently announced Cerebras Wafer-Scale Engine (WSE) for deep learning.
2:30pm–3:10pm Thursday, October 31, 2019
Location: Grand Ballroom E
Pengchong Jin (Google)
Pengchong Jin walks you through a typical development workflow on GCP for training and deploying an object detector to a self-driving car. He demonstrates how to quickly train the state-of-the-art RetinaNet model using Cloud TPUs and scale it up effectively on Cloud TPU pods. Pengchong also explains how to export a TensorRT-optimized model for GPU inference.
4:10pm–4:50pm Thursday, October 31, 2019
Location: Grand Ballroom C/D
Jack Chung, Chao Liu, and Daniel Lowell explore breaking convolution algorithms into modular pieces to be better fused with graph compilers such as accelerated linear algebra (XLA).