The TensorFlow team has recently released plans for TensorFlow 2.0. In 2.0, eager execution is the default, which changes much of how you should expect to use TensorFlow, especially how you interact with graphs. TensorFlow 2.0 is also more object based, both for saving models and for optimizers. As you create subfunctions, you can apply the "tf.function" decorator (the successor to "defun") to accelerate them. tf.function uses AutoGraph to convert loops, conditionals, and other Python constructs into graph operations that can run on accelerators. The result is a flexible, easy-to-use, Pythonic approach to building machine learning models, with the familiar scaling, performance, and distribution you expect from TensorFlow.
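As a rough illustration of that idea, here is a minimal sketch of a tf.function-decorated Python function (the function name and the toy computation are invented for this example, not from the talk); AutoGraph rewrites the Python while/if control flow into graph operations:

```python
# Minimal sketch, assuming TensorFlow 2.x is installed as `tensorflow`.
import tensorflow as tf

@tf.function
def collatz_steps(n):
    # AutoGraph converts this Python `while`/`if` on tensor values
    # into tf.while_loop / tf.cond graph ops.
    steps = tf.constant(0)
    while n > 1:
        if n % 2 == 0:
            n = n // 2
        else:
            n = 3 * n + 1
        steps += 1
    return steps

# Eager by default: calling the function runs the traced graph
# and returns a concrete tensor immediately.
print(collatz_steps(tf.constant(27)))  # tf.Tensor(111, shape=(), dtype=int32)
```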
Josh Gordon shares the very latest in TensorFlow, focusing on TensorFlow 2.0 and its easy-to-use eager execution. Josh also covers how to use TensorFlow’s revised high-level API and details pitfalls and tricks to get better performance on accelerator hardware.
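For context, the revised high-level API in TensorFlow 2.0 is tf.keras. The sketch below (the model, synthetic data, and hyperparameters are invented for illustration) shows the basic workflow, plus a tf.data input pipeline with prefetching, one common trick for keeping accelerator hardware fed:

```python
# Minimal tf.keras sketch for TensorFlow 2.x; data is synthetic, for illustration only.
import numpy as np
import tensorflow as tf

# Toy regression data: y = 3x + 2 plus noise.
x = np.random.rand(1000, 1).astype("float32")
y = 3 * x + 2 + 0.1 * np.random.randn(1000, 1).astype("float32")

# tf.data pipeline; prefetch overlaps input preparation with accelerator compute.
dataset = (tf.data.Dataset.from_tensor_slices((x, y))
           .shuffle(1000)
           .batch(64)
           .prefetch(tf.data.experimental.AUTOTUNE))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(1,)),
    tf.keras.layers.Dense(1),
])

# Object-based optimizers plug directly into compile/fit.
model.compile(optimizer=tf.keras.optimizers.Adam(0.01), loss="mse")
model.fit(dataset, epochs=5, verbose=0)

print(model.predict(np.array([[1.0]], dtype="float32")))  # should be close to 5.0
```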
Josh Gordon is a Developer Advocate at Google AI and teaches Applied Deep Learning at Columbia University and Machine Learning at Pace University. He has over a decade of machine learning experience to share. You can find him on Twitter.