We have recently released our TensorFlow 2.0 plans; by March we expect to have quite a bit of new material around 2.0.
TensorFlow 2.0 uses eager execution by default, which changes how you should expect to use TensorFlow, and especially how you interact with graphs. TensorFlow 2.0 is also more object-based, both for saving models and for optimization. As you write functions, you can apply the @tf.function decorator (the 2.0 successor to defun) to accelerate them. tf.function uses AutoGraph (see https://medium.com/tensorflow/autograph-converts-python-into-tensorflow-graphs-b2a871f87ec7) to convert loops, conditionals, and other Python control flow into graph operations that can run on accelerators. This leads to a flexible, easy-to-use, Pythonic approach to building machine learning models, all with the familiar scaling, performance, and distribution that you expect from TensorFlow.
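As a rough illustration, here is a minimal sketch of this idea (assuming TensorFlow 2.x; the function name cumulative_sum is just an example, not part of any API): a plain Python loop inside a decorated function, which AutoGraph traces into a graph.

```python
import tensorflow as tf  # assumes TensorFlow 2.x


@tf.function  # AutoGraph traces this Python function into a TensorFlow graph
def cumulative_sum(n):
    """Sum the integers 0..n-1 using an ordinary Python-style loop."""
    total = tf.constant(0)
    # AutoGraph rewrites this loop as a graph-level while loop,
    # so it runs as compiled ops on CPU, GPU, or TPU.
    for i in tf.range(n):
        total += i
    return total


print(cumulative_sum(tf.constant(10)))  # tf.Tensor(45, shape=(), dtype=int32)
```

The first call traces the function into a graph; later calls with the same input signature reuse the compiled graph rather than re-running the Python code.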
Josh Gordon is a developer advocate for TensorFlow at Google. He’s passionate about machine learning and computer science education. In his free time, Josh loves biking, running, and exploring the great outdoors.