Ryan Sepassi offers an overview of Tensor2Tensor, an open source library of datasets and models, and a framework for training, evaluation, and decoding, built on top of TensorFlow. Tensor2Tensor is actively used and maintained by scientists and engineers on the Google Brain team.
Tensor2Tensor makes it easy to try a model on many datasets and scale up from a single GPU to an entire Cloud TPU pod (512 cores). It includes the reference implementation of the “Attention Is All You Need” transformer model as well as the prototype implementation of Mesh TensorFlow, an exciting new direction for large-scale model parallelism within TensorFlow and on TPUs.
This session is sponsored by Google.
Ryan Sepassi is a senior research engineer on the Google Brain team at Google, where he works on natural language processing and reinforcement learning research and infrastructure. He’s an author and maintainer of the Tensor2Tensor library.
©2018, O’Reilly UK Ltd