Presented By O’Reilly and Cloudera
Make Data Work
21–22 May 2018: Training
22–24 May 2018: Tutorials & Conference
London, UK

A high-performance system for deep learning inference and visual inspection

Moty Fania (Intel)
14:05–14:45 Thursday, 24 May 2018
Secondary topics:  Data Platforms, Managing and Deploying Machine Learning

Who is this presentation for?

  • Developers and data scientists

What you'll learn

  • Explore Intel's high-performance system for deep learning inference, designed for production environments
  • Discover potential use cases that can leverage deep learning visual inference to provide meaningful insights

Description

Recent years have seen significant advances in deep learning and AI capabilities. AI solutions can augment or replace mundane tasks, increase workforce productivity, and relieve human bottlenecks. Unlike traditional automation, these solutions include cognitive aspects that used to require human decision making. In some cases, deep learning has proven to be even more accurate than humans at identifying patterns and can therefore be used to enable various kinds of automated, real-time decision making.

The advanced analytics team at Intel IT recently implemented an internal visual inference platform—a high-performance system for deep learning inference—designed for production environments. This innovative system enables easy deployment of many DL models in production while enabling a closed feedback loop where data flows in and decisions are returned through a fast REST API. The system maximizes throughput through batching and smart in-memory caching and can be deployed either as a cluster or as a standalone node. Moty Fania explains how Intel implemented the platform and shares lessons learned along the way.
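The batching and in-memory caching described above might be sketched as follows. This is a minimal illustration, not the platform's actual code: the class and function names are hypothetical, and a simple LRU dictionary stands in for the Redis-backed cache used in the real system.

```python
import hashlib
from collections import OrderedDict


class InferenceCache:
    """In-memory LRU cache keyed by a hash of the raw input.

    Illustrative stand-in for the platform's Redis-backed cache.
    """

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self._store = OrderedDict()

    @staticmethod
    def key(payload: bytes) -> str:
        return hashlib.sha256(payload).hexdigest()

    def get(self, payload: bytes):
        k = self.key(payload)
        if k in self._store:
            self._store.move_to_end(k)  # mark as recently used
            return self._store[k]
        return None

    def put(self, payload: bytes, result):
        self._store[self.key(payload)] = result
        self._store.move_to_end(self.key(payload))
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used


def infer_batch(payloads, cache, model_fn):
    """Serve cached results where possible; run the model once on the misses as a single batch."""
    results = [cache.get(p) for p in payloads]
    misses = [i for i, r in enumerate(results) if r is None]
    if misses:
        batch_out = model_fn([payloads[i] for i in misses])
        for i, out in zip(misses, batch_out):
            cache.put(payloads[i], out)
            results[i] = out
    return results
```

In a setup like this, repeated inputs never reach the model a second time, and the remaining misses are grouped into one batched call—the two levers the description credits for maximizing throughput.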

To enable stream analytics at scale, the system was built on a modern microservices architecture using cutting-edge technologies such as TensorFlow, TensorFlow Serving, Redis, Flask, and more. It is optimized to be easily deployed with Docker and Kubernetes and cuts down time to market for deploying a DL solution. By supporting different kinds of models and various inputs, including images and video streams, the system enables deployment of smart visual inspection solutions with real-time decision making.
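TensorFlow Serving, one of the components named above, exposes a standard REST predict endpoint of the form `/v1/models/<name>:predict` on port 8501 by default. A minimal client sketch is shown below; the model name `inspector` and the host/port defaults are illustrative assumptions, not details from the talk.

```python
import json
from urllib import request


def build_predict_request(instances, model="inspector", host="localhost", port=8501):
    """Build the URL and JSON body for a TensorFlow Serving REST predict call.

    `instances` is a list of model inputs, e.g. image tensors as nested lists.
    """
    url = f"http://{host}:{port}/v1/models/{model}:predict"
    body = json.dumps({"instances": instances}).encode("utf-8")
    return url, body


def predict(instances, **kwargs):
    """POST the request and return the 'predictions' field of the JSON response."""
    url, body = build_predict_request(instances, **kwargs)
    req = request.Request(url, data=body, headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["predictions"]
```

A lightweight Flask gateway in front of such calls is one plausible way the platform's "fast REST API" could route incoming data to the right served model.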

Topics include:

  • How Intel identified the set of characteristics and needs that are common to AI scenarios and made them available in this platform
  • Architecture and related technologies (TensorFlow Serving, Redis, Flask, etc.)
  • How Docker and Kubernetes made the on-premises deployment easy
  • Potential use cases that can leverage deep learning visual inference to provide meaningful insights
  • How the platform addresses visual inspection use cases that are essential to accelerate various product development and validation processes at Intel

Moty Fania

Intel

Moty Fania is a principal engineer for big data analytics at Intel IT and the CTO of the Advanced Analytics Group, which delivers big data and AI solutions across Intel. With over 15 years of experience in analytics, data warehousing, and decision support solutions, Moty leads the development and architecture of various big data and AI initiatives, such as IoT systems, predictive engines, online inference systems, and more. Moty holds a bachelor’s degree in economics and computer science and a master’s degree in business administration from Ben-Gurion University.
