Presented By O’Reilly and Cloudera
Make Data Work
September 11, 2018: Training & Tutorials
September 12–13, 2018: Keynotes & Sessions
New York, NY

A high-performance system for deep learning inference and visual inspection

Moty Fania (Intel), Sergei Kom (Intel)
1:10pm–1:50pm Thursday, 09/13/2018
Data science and machine learning
Location: 1A 15/16 Level: Intermediate
Secondary topics: Data Platforms, Deep Learning
Average rating: 5.00 (1 rating)

Who is this presentation for?

  • Developers and data scientists

What you'll learn

  • Explore Intel's high-performance system for deep learning inference designed for production environments
  • Discover potential use cases that can leverage deep learning visual inference to provide meaningful insights

Description

Recent years have seen significant advances in deep learning and AI capabilities. AI solutions can augment or replace mundane tasks, increase workforce productivity, and relieve human bottlenecks. Unlike traditional automation, these solutions include cognitive aspects that used to require human decision making. In some cases, deep learning has proven to be even more accurate than humans at identifying patterns and can therefore be used effectively to enable various kinds of automated, real-time decision making.

The advanced analytics team at Intel IT recently implemented an internal visual inference platform, a high-performance system for deep learning inference designed for production environments. This system makes it easy to deploy many DL models in production while supporting a closed feedback loop, where data flows in and decisions are returned through a fast REST API. The system maximizes throughput through batching and smart in-memory caching while retaining the ability to support long short-term memory networks. It can be deployed either as a cluster or as a standalone node.
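
The two throughput techniques mentioned above, request batching and in-memory caching, can be sketched in a few lines. The class below is purely illustrative: the names, batch size, and cache policy are hypothetical, not details of Intel's implementation.

```python
import hashlib

class CachingBatcher:
    """Illustrative sketch: cache repeated inputs, batch the rest.

    infer_fn runs the model on a *list* of inputs in one call, which is
    where batching wins over per-request inference.
    """

    def __init__(self, infer_fn, batch_size=8, cache_limit=1024):
        self.infer_fn = infer_fn
        self.batch_size = batch_size
        self.cache = {}        # payload digest -> previously computed result
        self.cache_limit = cache_limit
        self.pending = []      # (digest, payload) pairs awaiting a full batch
        self.results = {}      # digest -> result, filled at flush time

    @staticmethod
    def digest(payload: bytes) -> str:
        return hashlib.sha256(payload).hexdigest()

    def submit(self, payload: bytes):
        key = self.digest(payload)
        if key in self.cache:              # cache hit: no inference needed
            return self.cache[key]
        self.pending.append((key, payload))
        if len(self.pending) >= self.batch_size:
            self.flush()                   # batch is full: one model call
        return self.results.get(key)       # None if still waiting for a batch

    def flush(self):
        if not self.pending:
            return
        keys, payloads = zip(*self.pending)
        outputs = self.infer_fn(list(payloads))   # single batched model call
        for key, out in zip(keys, outputs):
            if len(self.cache) < self.cache_limit:
                self.cache[key] = out
            self.results[key] = out
        self.pending.clear()
```

A production system would add a flush timeout so a half-full batch never waits indefinitely, but the core idea is the same: identical payloads are answered from the cache, and everything else is grouped into one model call.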

To enable stream analytics at scale, the system was built on a modern microservices architecture using technologies such as TensorFlow, TensorFlow Serving, Redis, and Flask. It's optimized for easy deployment with Docker and Kubernetes, which cuts down time to market for deploying a DL solution. By supporting different kinds of models and various inputs, including images and video streams, the system enables the deployment of smart visual inspection solutions with real-time decision making.
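
When TensorFlow Serving sits behind such a stack, the "fast REST API" typically ends up forwarding to TF Serving's own REST predict endpoint. The helper below shows the shape of that call; the model name, version, and server address are hypothetical placeholders.

```python
import json

def predict_request(model_name, instances, version=None):
    """Build the URL and JSON body for TensorFlow Serving's REST predict
    endpoint (served on port 8501 by default).

    TF Serving expects POST /v1/models/<name>[/versions/<v>]:predict
    with a JSON body of the form {"instances": [...]}.
    """
    suffix = f"/versions/{version}" if version is not None else ""
    url = f"http://localhost:8501/v1/models/{model_name}{suffix}:predict"
    body = json.dumps({"instances": instances})
    return url, body
```

A gateway service (for example, a Flask route) would POST `body` to `url` and relay the `"predictions"` field of the response back to the caller, closing the feedback loop the description mentions.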

Moty Fania and Sergei Kom explain how Intel implemented the platform and share lessons learned along the way.

Topics include:

  • How Intel identified the set of characteristics and needs that are common to AI scenarios and made them available in this platform
  • Architecture and related technologies (TensorFlow Serving, Redis, Flask, etc.)
  • How Docker and Kubernetes made the on-premises deployment easy
  • Potential use cases that can leverage deep learning visual inference to provide meaningful insights
  • How the platform addresses visual inspection use cases that are essential to accelerating various product development and validation processes at Intel
Photo of Moty Fania

Moty Fania

Intel

Moty Fania is a principal engineer for big data analytics at Intel IT and the CTO of the Advanced Analytics Group, which delivers big data and AI solutions across Intel. With over 15 years of experience in analytics, data warehousing, and decision support solutions, Moty leads the development and architecture of various big data and AI initiatives, such as IoT systems, predictive engines, online inference systems, and more. Moty holds a bachelor’s degree in economics and computer science and a master’s degree in business administration from Ben-Gurion University.

Photo of Sergei Kom

Sergei Kom

Intel

Sergei Kom is a senior software engineer in Intel's Advanced Analytics Department. Sergei has extensive experience developing real-time applications using Spark Streaming, Kafka, Kafka Streams, and TensorFlow Serving. He enjoys learning new technologies and implementing them in new projects.