TensorFlow on the Cerebras Wafer-Scale Engine
Who is this presentation for?
- Machine and deep learning researchers and application developers, software engineers, TensorFlow and other framework engineers, computer hardware developers, and product and business leaders
The Cerebras WSE is the largest chip ever built and the central processor of a new computer system purpose-built to accelerate deep learning. The WSE delivers more compute, more memory, and more communication bandwidth to enable AI research at revolutionary speeds and scale.
Manjunath Kudlur and Andy Hock describe the software stack that connects users and TensorFlow to the Cerebras WSE. The Cerebras stack takes standard TensorFlow code as input and automatically generates an optimized executable for the WSE, giving you cluster-scale performance in a single node without any change to your programming paradigm.
They provide an overview of the compilation process, including its major components and the state of tools for the TensorFlow development community. They also describe the approach to extracting parallelism from large graphs onto a large chip: how the compiler decides, for example, how to map user-defined model layers to compute and memory, and layer connectivity to communication across the chip.
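To make the mapping idea above concrete, here is a toy sketch (not Cerebras's actual compiler, and all names and numbers are illustrative assumptions) of one kind of decision such a compiler faces: dividing a fixed pool of on-chip processing elements among model layers in proportion to each layer's estimated compute cost.

```python
def place_layers(layer_flops, total_pes):
    """Toy placement heuristic: give each layer a share of processing
    elements (PEs) proportional to its estimated FLOP count.

    This illustrates the flavor of layer-to-fabric mapping described in
    the talk abstract; it is not Cerebras's algorithm.
    """
    total_flops = sum(layer_flops.values())
    placement = {}
    for name, flops in layer_flops.items():
        # Guarantee every layer at least one PE, even tiny ones.
        placement[name] = max(1, round(total_pes * flops / total_flops))
    return placement

# Hypothetical three-layer network; FLOP estimates are made up.
layers = {"conv1": 2e9, "conv2": 6e9, "fc": 2e9}
print(place_layers(layers, total_pes=400))
```

A real compiler must also weigh memory footprint and the communication cost between adjacent layers, which is why placement and routing are solved together rather than layer by layer.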
Prerequisite knowledge
- A working knowledge of TensorFlow and how to build and train models
- Knowledge of XLA, TensorFlow estimators, and how to run training on other accelerators such as GPUs or TPUs (useful but not required)
What you'll learn
- Take a first look at Cerebras's software for its recently announced accelerator
- Gain insight into the software architecture and neural network compilation for a new chip at a truly unique scale
- Hear ideas for training acceleration beyond what's achievable with existing hardware and for model architectures beyond what existing hardware can accelerate
Manjunath Kudlur is the technical lead for Cerebras Systems' compiler software project, mapping neural networks to a revolutionary new deep learning accelerator with a wafer-scale processor. He's an engineer with expertise in compilers, machine learning, and parallel computing. Previously, he worked on TensorFlow at Google Brain and on compilers and programming languages research at NVIDIA. Manjunath has a PhD in computer science and engineering from the University of Michigan.
Andy Hock is the director of product for Cerebras Systems, an AI hardware startup out to accelerate deep learning and change compute forever. He has 10 years of experience in product management, technical program management, and enterprise business development; over 15 years of experience in research, algorithm development, and data analysis for image processing; and 5 years’ experience in applied machine learning and AI. Previously, Andy was the product manager of data and analytics for Terra Bella at Google, where he led the development of machine learning-powered data products from satellite imagery; was senior director for advanced technology programs at Skybox Imaging (which became Terra Bella following acquisition by Google in 2014); and was a senior program manager and senior scientist at Areté. He has a PhD in geophysics and space physics from the University of California, Los Angeles, and a BA in astronomy-physics from Colgate University.