Mar 15–18, 2020

Practical methods to enable continuous delivery and sustainability for AI

Moty Fania (Intel)
11:00am–11:40am Wednesday, March 18, 2020
Location: LL21 E/F

Who is this presentation for?

Data engineers, data architects, developers

Level

Intermediate

Description

For a few years now, the advanced analytics (AA) team at Intel has virtually doubled the number of ML and AI deployments year over year, growing its strategic business impact. Clearly, reducing the time it takes to create a solid ML model is important for enabling this acceleration, but it isn't enough by itself. It's equally important to find ways to rapidly move from ML models, which are the result of a data scientist's exploratory work, to an operational solution in production that closes the feedback loop and adds real value.

Moty Fania explains the concepts of continuous delivery and sustainability for AI, shows the technologies Intel uses to enable them for its AI deployments, and offers a detailed overview of some of the company's AI platforms and their architectures, explaining how they support and enable these concepts.

Intel has found that one of the most important ML engineering practices is the enablement of continuous delivery and continuous deployment of AI models. This means that, unlike traditional processes, everything is done without any handoffs. A data scientist can push the model (which is code), complying with some standards, and the rest happens automagically. The model is automatically built, deployed to CI, and run through a full set of automated tests; if it passes, it gets deployed and activated in an AI platform that already has all the integration hooks into the specific business domain. It means moving from release intervals that could take weeks to virtually push-button releases, with no QA teams or administrators in the middle.
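The automated test gate in such a pipeline can be sketched in a few lines. The model interface, test set, and accuracy threshold below are hypothetical illustrations of the pattern, not Intel's actual pipeline:

```python
# Minimal sketch of an automated quality gate in a model CD pipeline.
# All names and thresholds are illustrative assumptions.

def evaluate_model(predict, test_set):
    """Accuracy of `predict` over labeled (features, label) pairs."""
    correct = sum(1 for features, label in test_set if predict(features) == label)
    return correct / len(test_set)

def quality_gate(predict, test_set, threshold=0.9):
    """Allow deployment only if the candidate model clears the threshold."""
    return evaluate_model(predict, test_set) >= threshold

# Example: a trivial candidate "model" that always predicts class 1
test_set = [((1,), 1), ((2,), 1), ((3,), 0), ((4,), 1)]
model = lambda features: 1

print(quality_gate(model, test_set, threshold=0.7))  # True  (3/4 = 0.75)
print(quality_gate(model, test_set, threshold=0.9))  # False (blocks release)
```

In a real pipeline this gate would run inside CI after the automated build, and only a passing result would trigger activation on the platform.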

This approach of continuous delivery to designated AI platforms offers a good separation of concerns, as data scientists don't have to deal with many engineering aspects that aren't part of their expertise. It also means less code overall, better predictability thanks to the full automation, and more opportunities for code reuse. On top of that, you get useful manageability features that help track and maintain the model in production and reduce its total cost of ownership: applicative monitoring, built-in health checks, system tests, and training and retraining of models are all taken care of by the platform. Sustainability of models in production becomes increasingly significant at scale. ML models degrade over time and require maintenance, or their benefits diminish and they can even cause damage. Without sustainability measures in place, the need to support AI solutions deployed in the past increasingly distracts and diverts resources from new problems and projects.
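One common way to detect the model degradation described above is to monitor the drift between the score distribution seen at training time and the live distribution. Here is a minimal sketch using the population stability index (PSI); the histograms and the conventional 0.1/0.25 thresholds are illustrative assumptions, not Intel's stated method:

```python
# Sketch of drift monitoring for model sustainability: population stability
# index (PSI) between a training-time and a live score histogram.
import math

def psi(expected_counts, actual_counts):
    """PSI over pre-binned counts; larger values mean stronger shift."""
    e_total, a_total = sum(expected_counts), sum(actual_counts)
    value = 0.0
    for e, a in zip(expected_counts, actual_counts):
        e_pct = max(e / e_total, 1e-6)  # clamp to avoid log(0)
        a_pct = max(a / a_total, 1e-6)
        value += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return value

training = [100, 200, 300, 200, 100]  # score histogram at training time
live     = [100, 210, 290, 200, 100]  # similar distribution: tiny PSI
drifted  = [300, 300, 200, 100, 0]    # shifted distribution: large PSI

print(psi(training, live) < 0.1)      # True: conventionally "stable"
print(psi(training, drifted) > 0.25)  # True: conventionally "retrain"
```

A platform-level monitor like this can trigger the retraining flow automatically instead of leaving maintenance to ad hoc human attention.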

Intel has implemented several AI platforms on top of which all of the company's ML models and AI services are delivered, deployed, and sustained in a managed way. These AI platforms are built on a modern microservices architecture and a message bus backbone. They employ open source technologies such as Spark, TensorFlow, TensorFlow Serving, Ray, Snorkel, Redis, Python, and Kafka Streams, optimized to be easily deployed with Docker and Kubernetes. For cloud deployments, Intel implemented a serverless (FaaS) architecture that offers similar capabilities.
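The message-bus pattern behind such a platform decouples producers of inference requests from the services that consume them. The sketch below uses the Python standard library's `queue` as a stand-in for a Kafka topic or Redis stream; the service name and stop sentinel are hypothetical:

```python
# Sketch of a message-bus-backed inference microservice, with a stdlib
# queue standing in for Kafka/Redis. Names are illustrative assumptions.
import queue
import threading

bus = queue.Queue()  # stand-in for a Kafka topic / Redis stream

def inference_service(predict, results):
    """Consume requests from the bus and record predictions; '_stop_' halts."""
    while True:
        message = bus.get()
        if message == "_stop_":
            break
        results.append((message, predict(message)))

results = []
worker = threading.Thread(
    target=inference_service, args=(lambda x: x * 2, results)
)
worker.start()

for request in [1, 2, 3]:  # producers publish requests to the bus
    bus.put(request)
bus.put("_stop_")
worker.join()
print(results)  # [(1, 2), (2, 4), (3, 6)]
```

Because producers and the inference service only share the bus, either side can be redeployed, scaled, or replaced (e.g., swapped for a FaaS handler in the cloud) without touching the other.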

This approach, these platforms, and the related advanced analytics capabilities have generated over 1 billion USD in value from increased revenue, improved product performance, and cost savings in the past five years.

What you'll learn

  • Discover how to enable concepts of continuous delivery and sustainability for AI
  • Learn what technologies Intel uses to enable this process
  • Identify the concept of AI platforms for deploying ML models and their characteristics
  • See real examples with a thorough overview of the architecture Intel implemented and related technologies

Moty Fania

Intel

Moty Fania is a principal engineer and the CTO of the Advanced Analytics Group at Intel, which delivers AI and big data solutions across Intel. Moty has rich experience in ML engineering, analytics, data warehousing, and decision-support solutions. He led the architecture work and development of various AI and big data initiatives such as IoT systems, predictive engines, online inference systems, and more.

