Sep 23–26, 2019

Deploying End-to-End Deep Learning Pipelines with ONNX

5:25pm–6:05pm Wednesday, September 25, 2019
Location: 1A 06/07
Secondary topics: Deep Learning, Model Development, Governance, Operations

Who is this presentation for?

ML engineers, production engineers, data scientists

Level

Intermediate

Description

A deep learning model is often viewed as fully self-contained, freeing practitioners from the burden of data processing and feature engineering. However, in most real-world applications of AI, these models have data pre-processing, feature extraction, and transformation requirements that are just as complex as those of more traditional ML models.

Any non-trivial use case requires care to ensure there is no skew between the training-time data pipeline and the inference-time data pipeline. This is not simply a theoretical concern: small differences or errors can be difficult to detect, yet they can have a dramatic impact on the performance and efficacy of the deployed solution.

Despite this, there are currently few widely accepted, standard solutions for deploying end-to-end deep learning pipelines to production. Recently, the Open Neural Network Exchange (ONNX) has emerged as a standardized format for representing deep learning models. While this is useful for representing the core model inference phase, we need to go further to encompass deployment of the end-to-end pipeline.
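
As a rough illustration of that core-model export step, the sketch below (my own example, not taken from the talk; it assumes PyTorch and its built-in torch.onnx exporter are installed) serializes a small network's computation graph to an ONNX file. Note that it captures only the model itself, not the surrounding pre- and post-processing:

    # Minimal sketch: export only the core model's computation graph to ONNX.
    # Pre-processing and post-processing steps are not captured here.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
    model.eval()

    dummy_input = torch.randn(1, 4)  # example input used to trace the graph
    torch.onnx.export(
        model, dummy_input, "model.onnx",
        input_names=["input"], output_names=["logits"],
        dynamic_axes={"input": {0: "batch"}, "logits": {0: "batch"}},
    )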

In this talk, I will introduce ONNX for exporting deep learning computation graphs, together with the ONNX-ML extension of the core specification for exporting "traditional" ML models as well as common feature extraction, data transformation, and post-processing steps. I will cover how to use ONNX and the growing ecosystem of exporter libraries for common frameworks (including TensorFlow, PyTorch, Keras, and scikit-learn) to deploy complete deep learning pipelines, as well as the gaps and missing pieces that must be taken into account and those still to be addressed.
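
To give a sense of what an end-to-end pipeline export can look like, here is a minimal sketch (again my own illustration, assuming the skl2onnx and onnxruntime packages are installed): a scikit-learn pipeline that combines feature scaling with a classifier is converted to a single ONNX graph using ONNX-ML operators, then scored with ONNX Runtime, so pre-processing is identical at training and serving time:

    # Minimal sketch (assumes skl2onnx and onnxruntime are installed):
    # export a full scikit-learn pipeline -- scaling plus classifier -- to ONNX-ML,
    # then score it with ONNX Runtime so pre-processing matches at serving time.
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from skl2onnx import convert_sklearn
    from skl2onnx.common.data_types import FloatTensorType
    import onnxruntime as ort

    X, y = load_iris(return_X_y=True)
    pipeline = Pipeline([("scale", StandardScaler()),
                         ("clf", LogisticRegression(max_iter=200))]).fit(X, y)

    # Convert pre-processing and model into one ONNX graph.
    onnx_model = convert_sklearn(
        pipeline, initial_types=[("input", FloatTensorType([None, 4]))])
    with open("pipeline.onnx", "wb") as f:
        f.write(onnx_model.SerializeToString())

    # Serve the pipeline with ONNX Runtime; scikit-learn is not needed at inference time.
    sess = ort.InferenceSession("pipeline.onnx", providers=["CPUExecutionProvider"])
    preds = sess.run(None, {"input": X[:5].astype(np.float32)})[0]
    print(preds)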

Prerequisite knowledge

Basic knowledge of deep learning and related frameworks will be useful.

What you'll learn

The "last mile" of deep learning deployment is often overlooked but is among the most critical aspects of real-world systems. Learn how the open-source ONNX format and surrounding ecosystem are solving this challenge, enabling standardized deployment of end-to-end deep learning pipelines.

Nick Pentreath

IBM

Nick Pentreath is a principal engineer in IBM’s Center for Open Source Data & AI Technologies (CODAIT), where he works on machine learning. Previously, he cofounded Graphflow, a machine learning startup focused on recommendations. He has also worked at Goldman Sachs, Cognitive Match, and Mxit. He is a committer and PMC member of the Apache Spark project and author of Machine Learning with Spark. Nick is passionate about combining commercial focus with machine learning and cutting-edge technology to build intelligent systems that learn from data to add business value.

