Fueling innovative software
July 15-18, 2019
Portland, OR

Toward a de facto standard in AI: What’s new in TensorFlow 2.0 (sponsored by IBM)

Romeo Kienzler (IBM Center for Open Source Data and AI Technologies)
2:35pm–3:15pm Wednesday, July 17, 2019
Sponsored
Location: F151
Average rating: 4.80 (5 ratings)

Toward the end of 2015, Google released TensorFlow, which started out as just another numerical library but has grown to become a de facto standard in AI technologies. TensorFlow's initial release was widely hyped, in no small part because it came from Google, but the project also drew complaints about usability, particularly the fact that debugging was only possible after construction of the static execution graph.

In addition, neural networks needed to be expressed as a set of linear algebra operations, which was considered too low level by many practitioners. PyTorch and Keras addressed many of the flaws in TensorFlow and gained a lot of ground. The recent TensorFlow 2.0 has successfully addressed those complaints and promises to become the go-to framework for many AI problems.
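
The eager-execution change is easiest to see in a few lines of code. The following is a minimal sketch assuming TensorFlow 2.x, where eager execution is the default; the tensors and shapes are arbitrary toy values.

    import tensorflow as tf  # assumes TensorFlow 2.x (eager execution on by default)

    # Operations run immediately and return concrete values, so intermediate
    # results can be inspected with plain Python instead of building a static
    # graph and running it in a Session first.
    x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    w = tf.Variable(tf.ones([2, 1]))

    with tf.GradientTape() as tape:
        y = tf.matmul(x, w)              # executes right away
        loss = tf.reduce_sum(y ** 2)

    print(loss.numpy())                   # inspect values directly while debugging
    print(tape.gradient(loss, w).numpy())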

Romeo Kienzler introduces you to the most prominent changes in TensorFlow 2.0 and explains how you can use these new features successfully in your projects, covering eager execution, parallelization strategies, the advantages of the tight high-level Keras integration, live neural network training monitoring with TensorBoard, automated hyperparameter optimization, model serving with TensorFlow Serving, TensorFlow.js, and TensorFlow Lite. He then shares an outlook on TFX, through which Google plans to open source its complete AI pipeline, and contrasts it with existing de facto standard frameworks like Apache Spark.
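
The high-level workflow these features enable can be sketched in a few lines. The snippet below is a hypothetical example assuming TensorFlow 2.x: the toy data, layer sizes, and log/export paths are placeholders, and MirroredStrategy stands in for whichever tf.distribute strategy fits the available hardware.

    import numpy as np
    import tensorflow as tf

    # Toy stand-in data; a real project would use its own dataset.
    x_train = np.random.rand(256, 20).astype("float32")
    y_train = np.random.randint(0, 2, size=(256, 1)).astype("float32")

    # MirroredStrategy is one of the tf.distribute parallelization strategies;
    # building and compiling the Keras model inside its scope distributes
    # training across the available local devices.
    strategy = tf.distribute.MirroredStrategy()
    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])

    # The TensorBoard callback writes logs that can be watched live during training.
    tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir="./logs")
    model.fit(x_train, y_train, epochs=3, batch_size=32,
              callbacks=[tensorboard_cb])

    # Exporting as a SavedModel yields an artifact that TensorFlow Serving,
    # TensorFlow Lite, or TensorFlow.js converters can consume.
    model.save("exported_model")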

This session is sponsored by IBM.

What you'll learn

  • Learn the most prominent changes in TensorFlow 2.0 and how to use them

Romeo Kienzler

IBM Center for Open Source Data and AI Technologies

Romeo Kienzler is chief data scientist at the IBM Center for Open Source Data and AI Technologies (CODAIT) in San Francisco, where he leads strategy for AI model training, and he's a member of the IBM Technical Expert Council and the IBM Academy of Technology, IBM's leading brain trusts. He's an associate professor of artificial intelligence at the Swiss University of Applied Sciences Berne. His current research focus is on cloud-scale machine learning and deep learning using open source technologies, including TensorFlow, Keras, DeepLearning4J, Apache SystemML, and the Apache Spark stack. He's the lead instructor of the Advanced Data Science specialization on Coursera, with courses on scalable data science, advanced machine learning, signal processing, and applied AI with deep learning. He contributes to various open source projects, regularly speaks at international conferences, and has significant publications in the areas of data mining, machine learning, and blockchain technologies. His latest book, Mastering Apache Spark V2.X, has been translated into Chinese. He earned an MSc (ETH) in computer science with a specialization in information systems, bioinformatics, and applied statistics from the Swiss Federal Institute of Technology Zurich.