Presented By O’Reilly and Intel Nervana
Put AI to work
September 17-18, 2017: Training
September 18-20, 2017: Tutorials & Conference
San Francisco, CA

Scalable operationalization of trained CNTK and TensorFlow DNNs

Mary Wahl (Microsoft Corporation)
4:50pm–5:30pm Tuesday, September 19, 2017
Implementing AI
Location: Imperial B Level: Intermediate
Secondary topics:  Technical best practices, Tools and frameworks
Average rating: **** (4.00, 1 rating)

Prerequisite Knowledge

  • Familiarity with deep learning frameworks and distributed systems

What you'll learn

  • Learn how to efficiently distribute DNN evaluation tasks across worker nodes in a Hadoop ecosystem cluster
  • Understand common pitfalls to avoid when changing input data loading and preprocessing methods between model training and deployment


Deep neural networks (DNNs) are extraordinarily versatile artificial intelligence models that require substantial computing resources in both training and deployment. By operationalizing trained DNNs on a cloud-based Hadoop ecosystem, data engineers can dynamically scale cluster size to achieve and maintain desired evaluation throughput rates for changing workloads.

Using an aerial image classification use case, Mary Wahl demonstrates how DNNs created in popular deep learning frameworks, such as Microsoft’s Cognitive Toolkit (CNTK) and Google’s TensorFlow, can be deployed on Microsoft HDInsight Spark clusters to efficiently partition evaluation tasks across worker nodes and minimize data transfer latency from HDFS (Azure Data Lake Store).
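The core pattern for partitioning evaluation tasks across worker nodes can be sketched roughly as follows. This is a minimal illustration, not the talk's actual code: `load_model` is a hypothetical stand-in for whatever deserializes the trained CNTK or TensorFlow model on each worker, and records are assumed to be (id, features) pairs. The key idea is to load the model once per partition rather than once per record, so the deserialization cost is amortized across the whole partition.

```python
def score_partition(records, load_model):
    """Score every record in one partition with a single model load.

    records    -- iterable of (record_id, features) pairs
    load_model -- zero-argument callable returning a model; hypothetical
                  stand-in for deserializing a trained DNN on the worker
    """
    model = load_model()  # one expensive load per partition, not per record
    for record_id, features in records:
        yield record_id, model(features)

# On an HDInsight Spark cluster this generator plugs directly into
# RDD.mapPartitions, e.g. (assuming an images_rdd of (id, features) pairs):
#   scored = images_rdd.mapPartitions(
#       lambda part: score_partition(part, load_model))
```

Because `score_partition` is a plain generator, the same function can be exercised locally before submitting it to the cluster.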

Most deep learning frameworks offer built-in minibatching functionality, including associated methods for data deserialization and preprocessing. A user would be remiss not to take advantage of these efficient functions during training, but their requirements (loading input data from disk, proprietary file formatting) may be unacceptable when applying a trained model to new data. For example, web services or worker nodes on Hadoop ecosystem clusters should process input data directly without writing to disk. Users may therefore need to recreate for deployment the loading and preprocessing steps that their deep learning framework’s built-in methods performed during training. Mary covers the most insidious and common errors she has encountered with that process.
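A sketch of what recreating those loading and preprocessing steps in memory might look like, assuming (hypothetically) that the training reader resized images, reordered channels from RGB to BGR, transposed HWC to CHW, and subtracted per-channel means. The exact steps and constants must be read off the framework's training configuration, not guessed; the means below are placeholders, and mismatches in channel order or layout are exactly the kind of insidious error described above.

```python
import numpy as np

# Hypothetical per-channel (BGR) training means -- must match the values
# actually used by the framework's training-time reader.
CHANNEL_MEANS = np.array([104.0, 117.0, 124.0], dtype=np.float32)

def preprocess(image_hwc):
    """Replicate assumed training-time preprocessing, entirely in memory.

    image_hwc -- uint8 array of shape (H, W, 3) in RGB order, already
                 resized to the network's expected input dimensions
    """
    bgr = image_hwc[:, :, ::-1].astype(np.float32)       # RGB -> BGR
    bgr -= CHANNEL_MEANS                                 # mean subtraction
    return np.ascontiguousarray(bgr.transpose(2, 0, 1))  # HWC -> CHW
```

No file is ever written to disk, so the same function works inside a web service handler or a Spark worker.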

Topics include:

  • Training and validation set creation from public data (the US National Agriculture Imagery Program and National Land Cover Database)
  • Creation of DNNs from pretrained AlexNet and ResNet models using transfer learning
  • Evaluation of the trained DNNs with a validation image set using PySpark
  • Use of the validated models to study patterns of recent urban development
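The transfer-learning step in the list above follows a common pattern: the pretrained network (AlexNet or ResNet) is used as a fixed featurizer, and only a small classifier head is trained on the new task. A framework-neutral sketch, where `featurize` is a hypothetical stand-in for the pretrained network's penultimate layer:

```python
import numpy as np

def train_transfer_classifier(featurize, images, labels, lr=0.1, epochs=200):
    """Train a logistic-regression head on frozen pretrained features.

    featurize -- maps one image to a fixed-length feature vector
                 (stand-in for the pretrained network's penultimate layer)
    labels    -- binary labels in {0, 1}
    """
    X = np.stack([featurize(im) for im in images])  # (n, d) frozen features
    y = np.asarray(labels, dtype=np.float64)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):                          # plain gradient descent
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))       # sigmoid probabilities
        grad = p - y
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    # Return a predictor: featurize a new image, then apply the trained head.
    return lambda im: 1.0 / (1.0 + np.exp(-(featurize(im) @ w + b)))
```

Only `w` and `b` are learned; the pretrained weights behind `featurize` stay frozen, which is what makes training cheap even with a small labeled set.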

Mary Wahl

Microsoft Corporation

Mary Wahl is a member of Microsoft’s Boston-based algorithms and data science team, which develops custom machine learning solutions for enterprise customers. Previously, Mary studied recent human migration, disease risk estimation, and forensic reidentification using crowdsourced genomic and genealogical data at the Whitehead Institute and Columbia University under Yaniv Erlich.