Presented By O’Reilly and Intel AI
Put AI to work
8-9 Oct 2018: Training
9-11 Oct 2018: Tutorials & Conference
London, UK

Lessons learned building an open deep learning model exchange

11:55–12:35 Thursday, 11 October 2018
AI Business Summit, Implementing AI
Location: Park Suite
Secondary topics: Deep learning models, Platforms and infrastructure

Who is this presentation for?

  • Data scientists, machine learning engineers, and researchers

Prerequisite knowledge

  • Familiarity with deep learning (useful but not required)

What you'll learn

  • Gain insight into the challenges encountered in building out an open deep learning model exchange, the solutions to these challenges, and lessons learned along the way


The common perception of applying deep learning is that you take an open source or research model, train it on raw data, and deploy the result as a fully self-contained artifact. The reality is far more complex.

For the training phase, users face an array of challenges, including handling varied deep learning frameworks, hardware requirements, and configurations, not to mention code quality, consistency, and packaging. For the deployment phase, they face another set of challenges, including custom requirements for data pre- and postprocessing, inconsistencies across frameworks, and a lack of standardization in serving APIs.

The goal of the IBM Code Model Asset Exchange (MAX) is to remove these barriers to entry for developers to obtain, train, and deploy open source deep learning models for their business applications. In building the exchange, IBM encountered all these challenges and more.

For the training phase, IBM aims to leverage Fabric for Deep Learning (FfDL), an open source project that provides framework-independent training of deep learning models on Kubernetes. For the deployment phase, MAX provides standardized, container-based, fully self-contained model artifacts that encompass the end-to-end deep learning prediction pipeline.

Nick Pentreath walks you through the process of building MAX, sharing the challenges encountered, the solutions developed, and the lessons learned, along with best practices for cross-framework, standardized deep learning model training and deployment.

Nick Pentreath


Nick Pentreath is a principal engineer at the Center for Open Source Data & AI Technologies (CODAIT) at IBM, where he works on machine learning. Previously, he cofounded Graphflow, a machine learning startup focused on recommendations, and was at Goldman Sachs, Cognitive Match, and Mxit. He’s a committer and PMC member of the Apache Spark project and author of Machine Learning with Spark. Nick is passionate about combining commercial focus with machine learning and cutting-edge technology to build intelligent systems that learn from data to add business value.