Presented By O’Reilly and Cloudera
Make Data Work
21–22 May 2018: Training
22–24 May 2018: Tutorials & Conference
London, UK

A high-performance system for deep learning inference and visual inspection

Moty Fania (Intel)
12:05–12:45 Thursday, 24 May 2018
Secondary topics: Data Platforms, Managing and Deploying Machine Learning

Who is this presentation for?

  • Developers and data scientists

What you'll learn

  • Explore Intel's high-performance system for deep learning inference designed for production environments
  • Discover potential use cases that can leverage deep learning visual inference to provide meaningful insights

Description

Recent years have seen significant evolution in deep learning and AI capabilities. AI solutions can augment or replace mundane tasks, increase workforce productivity, and relieve human bottlenecks. Unlike traditional automation, these solutions include cognitive aspects that used to require human decision making. In some cases, deep learning has proven to be even more accurate than humans at identifying patterns and can therefore be used effectively to enable various kinds of automated, real-time decision making.

The advanced analytics team at Intel IT recently implemented an internal visual inference platform—a high-performance system for deep learning inference—designed for production environments. This innovative system enables easy deployment of many DL models in production while supporting a closed feedback loop where data flows in and decisions are returned through a fast REST API. The system maximizes throughput through batching and smart in-memory caching and can be deployed as either a cluster or a standalone node.
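The batching-and-caching pattern described above can be sketched roughly as follows. This is a minimal illustration only, not Intel's actual implementation; the model stub, cache structure, and batch size are assumptions for the example:

```python
import hashlib

BATCH_SIZE = 4   # assumed micro-batch size
cache = {}       # in-memory cache: payload hash -> prediction

def run_model(batch):
    """Stub standing in for the real DL model; returns one label per input."""
    return [f"label-for-{h[:8]}" for h in batch]

def predict(payloads):
    """Return one prediction per payload, serving repeats from the cache
    and running only cache misses through the model, in fixed-size batches."""
    hashes = [hashlib.sha256(p).hexdigest() for p in payloads]
    # Deduplicate misses so repeated inputs cost one inference.
    pending = list(dict.fromkeys(h for h in hashes if h not in cache))
    for i in range(0, len(pending), BATCH_SIZE):
        batch = pending[i:i + BATCH_SIZE]
        for h, label in zip(batch, run_model(batch)):
            cache[h] = label
    return [cache[h] for h in hashes]
```

A REST endpoint wrapping `predict()` would then return decisions immediately for previously seen inputs and batch the rest, which is where the throughput gain described above comes from.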

Moty Fania explains how Intel implemented the platform and shares lessons learned along the way. To enable stream analytics at scale, the system was built in a modern microservices architecture using cutting-edge technologies, such as TensorFlow, TensorFlow Serving, Redis, Flask, and more. It is optimized to be easily deployed with Docker and Kubernetes and cuts down time to market for deploying a DL solution. By supporting different kinds of models and various inputs, including images and video streams, this system can enable deployment of smart visual inspection solutions with real-time decision making.
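For reference, TensorFlow Serving (one of the technologies named above) exposes a REST predict endpoint of the form `/v1/models/<name>:predict` that accepts a JSON body with an `instances` list. A gateway service in front of it might build requests like this; the host, model name, and input shape here are illustrative assumptions, not details from the talk:

```python
import json
from urllib.request import Request

def build_predict_request(host, model_name, instances):
    """Build a TensorFlow Serving REST :predict request.

    `instances` is a list of model inputs, e.g. image tensors
    encoded as nested lists of numbers."""
    url = f"http://{host}/v1/models/{model_name}:predict"
    body = json.dumps({"instances": instances}).encode("utf-8")
    return Request(url, data=body,
                   headers={"Content-Type": "application/json"})

# Example: two tiny 2x2 grayscale "images" for a hypothetical model
req = build_predict_request("localhost:8501", "inspector",
                            [[[0, 1], [1, 0]], [[1, 1], [0, 0]]])
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) would return a JSON body whose `predictions` field holds one result per instance.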

Topics include:

  • How Intel identified the set of characteristics and needs that are common to AI scenarios and made them available in this platform
  • Architecture and related technologies (TensorFlow Serving, Redis, Flask, etc.)
  • How Docker and Kubernetes made the on-premises deployment easy
  • Potential use cases that can leverage deep learning visual inference to provide meaningful insights
  • How the platform addresses visual inspection use cases that are essential to accelerate various product development and validation processes at Intel

Moty Fania

Intel

Moty Fania is a principal engineer and the CTO of the Advanced Analytics Group at Intel, which delivers AI and big data solutions across Intel. Moty has rich experience in ML engineering, analytics, data warehousing, and decision-support solutions. He led the architecture work and development of various AI and big data initiatives such as IoT systems, predictive engines, online inference systems, and more.