Advanced model deployments with TensorFlow Serving
Who is this presentation for?
- Machine learning engineers, DevOps engineers, and data scientists interested in deploying machine learning models
TensorFlow Serving is a cornerstone of the TensorFlow ecosystem. It has dramatically simplified the deployment of machine learning models and accelerated the pace at which models reach production. Unfortunately, many machine learning engineers aren't familiar with its finer details, so they miss out on significant performance gains.
Hannes Hapke provides a brief introduction to TensorFlow Serving, then leads a deep dive into advanced settings and use cases. He covers concepts and implementation suggestions for increasing the performance of a TensorFlow Serving setup, including:
- how clients can request model meta-information from the model server
- model optimization options for optimal prediction throughput
- batching requests to improve throughput
- an example implementation of model A/B testing
- monitoring a TensorFlow Serving setup
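As a taste of the meta-information topic above, the following sketch queries TensorFlow Serving's REST metadata endpoint, which returns a model's signature definitions (input and output names, shapes, and dtypes). The host, port, and model name are placeholder assumptions; the endpoint path itself follows TensorFlow Serving's documented REST API.

```python
# Sketch: request model meta-information from a running TensorFlow Serving
# instance via its REST API. "localhost", 8501, and "my_model" are
# illustrative assumptions, not fixed values.
import json
import urllib.request


def metadata_url(host: str, port: int, model_name: str) -> str:
    """Build the TensorFlow Serving REST metadata URL for a model."""
    return f"http://{host}:{port}/v1/models/{model_name}/metadata"


def fetch_metadata(host: str, port: int, model_name: str) -> dict:
    """Fetch the model's metadata (including its signature definitions)."""
    with urllib.request.urlopen(metadata_url(host, port, model_name)) as resp:
        return json.load(resp)


# Against a running server (assumption: localhost:8501, model "my_model"):
# meta = fetch_metadata("localhost", 8501, "my_model")
# print(meta["metadata"]["signature_def"])
```

Clients can use this response to validate payloads before sending prediction requests, rather than hard-coding input names and shapes.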
Prerequisites
- A basic understanding of Docker functionality and how HTTP requests work
- General knowledge of machine learning (useful but not required)
What you'll learn
- Learn how to increase TensorFlow Serving inference performance, reduce the inference response time by tweaking the request payload, and run TensorFlow Serving with NVIDIA's TensorRT for further performance improvements
- Discover how to configure batch requests in TensorFlow Serving and how to configure TensorFlow Serving to provide A/B Testing capabilities
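One common way to get A/B testing on top of TensorFlow Serving is to split traffic in the client, since the server itself only serves whatever versions its model config exposes. The sketch below is an illustrative assumption of such a client-side split: it hashes a user ID into a stable bucket and routes a fraction of users to a candidate version via TensorFlow Serving's version labels (the label names "production" and "candidate" are placeholders you would define in the server's model configuration).

```python
# Sketch: client-side A/B routing for TensorFlow Serving. The split happens
# in the client, not in the server; label names and the 10% candidate share
# are illustrative assumptions.
import hashlib


def choose_version(user_id: str, candidate_share: float = 0.1) -> str:
    """Deterministically assign a user to 'candidate' or 'production'.

    Hashing the user ID keeps the assignment stable across requests, so a
    given user always hits the same model version during the experiment.
    """
    bucket = int(hashlib.md5(user_id.encode("utf-8")).hexdigest(), 16) % 100
    return "candidate" if bucket < candidate_share * 100 else "production"


def prediction_url(host: str, port: int, model_name: str,
                   version_label: str) -> str:
    """Build a REST predict URL that targets a version label.

    Assumes the server's model config maps labels such as 'production' and
    'candidate' to concrete model versions.
    """
    return (f"http://{host}:{port}/v1/models/{model_name}"
            f"/labels/{version_label}:predict")
```

Routing by label rather than by version number lets you promote a candidate version by editing only the server-side model config, without redeploying clients.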
Hannes Hapke is a machine learning enthusiast and a Google Developer Expert for machine learning. He’s applied deep learning to a variety of computer vision and natural language problems, but his main interest is in machine learning infrastructure and automating model workflows. Hannes is a coauthor of Natural Language Processing in Action and is working on Building Machine Learning Pipelines with TensorFlow for O’Reilly. When he isn’t working on a deep learning project, you’ll find him running long distances, hiking, or enjoying a book with a good cup of coffee.