Deploying end-to-end deep learning pipelines with ONNX
Who is this presentation for?
- ML engineers, production engineers, and data scientists
Level
Description
A deep learning model is often viewed as fully self-contained, freeing practitioners from the burden of data processing and feature engineering. In most real-world applications of AI, however, these models have data preprocessing, feature extraction, and transformation requirements just as complex as those of more traditional ML models. Any nontrivial use case requires care to ensure there is no skew between the training-time data pipeline and the inference-time data pipeline. This is not simply theoretical: small differences or errors can be difficult to detect yet have a dramatic impact on the performance and efficacy of the deployed solution.
Despite this, there are currently few widely accepted, standard solutions for simple deployment of end-to-end deep learning pipelines to production. Recently, ONNX has emerged as a standardized format for representing deep learning models. While this is useful for representing the core model inference phase, you need to go further to encompass deployment of the end-to-end pipeline.
Nick Pentreath explores ONNX for exporting deep learning computation graphs and the ONNX-ML extension of the core specification for exporting both "traditional" ML models and common feature extraction, data transformation, and postprocessing steps. You'll see how to use ONNX and the growing ecosystem of exporter libraries for common frameworks (including TensorFlow, PyTorch, Keras, scikit-learn, and others) to deploy complete deep learning pipelines, as well as the gaps and missing pieces that still need to be addressed.
Prerequisite knowledge
- A basic knowledge of deep learning and related frameworks (useful but not required)
What you'll learn
- Learn how the open source ONNX format and surrounding ecosystem are helping to standardize deployment of end-to-end deep learning pipelines
Nick Pentreath
IBM
Nick Pentreath is a principal engineer at the Center for Open Source Data & AI Technologies (CODAIT) at IBM, where he works on machine learning. Previously, he cofounded Graphflow, a machine learning startup focused on recommendations, and was at Goldman Sachs, Cognitive Match, and Mxit. He’s a committer and PMC member of the Apache Spark project and author of Machine Learning with Spark. Nick is passionate about combining commercial focus with machine learning and cutting-edge technology to build intelligent systems that learn from data to add business value.