TensorFlow and its community provide a variety of tools for developing novel deep learning models. Many talks have focused on tools like TensorBoard or on novel TensorFlow features such as support for sequence-to-sequence networks. However, developing a deep learning model with TensorFlow is often only half the story: to be useful to the public, the model needs to be deployed.
Hannes Hapke explains how to deploy your TensorFlow model easily with TensorFlow Serving, introduces the emerging Kubeflow project, and highlights deployment considerations such as model versioning and the deployment workflow. You’ll learn when a deployment with TensorFlow Serving or Kubeflow makes sense, how to deploy trained TensorFlow models with TensorFlow Serving, how to install the required system dependencies, and the basic concepts of Kubeflow. You’ll leave ready to deploy your TensorFlow models yourself or to guide your DevOps colleagues in deploying your models for your organization.
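As a taste of the versioning topic: TensorFlow Serving loads a model from a base directory containing numbered version subdirectories (e.g. `/models/my_model/1/`) and, by default, serves the highest version over its REST API on port 8501. The sketch below, using only the standard library, builds the URL and JSON body for the documented `:predict` endpoint; the host, model name, and input values are illustrative placeholders, not from the talk.

```python
import json

def build_predict_request(host, model_name, instances, version=None):
    """Build the URL and JSON body for TensorFlow Serving's REST predict API."""
    # TensorFlow Serving routes requests to the latest loaded version
    # unless a specific version is pinned in the URL.
    model_path = f"v1/models/{model_name}"
    if version is not None:
        model_path += f"/versions/{version}"
    url = f"http://{host}:8501/{model_path}:predict"
    # The REST API expects a JSON object with an "instances" list,
    # one entry per input example.
    body = json.dumps({"instances": instances})
    return url, body

# Example: query the latest version, then pin version 2 explicitly.
url, body = build_predict_request("localhost", "my_model", [[1.0, 2.0, 3.0]])
url_v2, _ = build_predict_request("localhost", "my_model", [[1.0]], version=2)
```

Sending `body` as an HTTP POST to `url` would return a JSON response with a `"predictions"` list, assuming a model named `my_model` is already exported and being served.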
Hannes Hapke is the VP of Engineering and AI at Caravel, a conversational AI start-up for digital retail. He has been a machine learning enthusiast for many years and is a Google Developer Expert in Machine Learning. Hannes has applied deep learning to a variety of computer vision and natural language problems, but his main interest is machine learning infrastructure and automating model workflows. He is a coauthor of the deep learning book Natural Language Processing in Action and is currently working on the O’Reilly book about TensorFlow Extended, “Building and Managing Machine Learning Workflows”. When he isn’t working on a deep learning project, you’ll find him outdoors running, hiking, or enjoying a good cup of coffee with a great book.
©2018, O'Reilly Media, Inc.