Presented By O’Reilly and Cloudera
Make Data Work
March 5–6, 2018: Training
March 6–8, 2018: Tutorials & Conference
San Jose, CA

Machine-learned model quality monitoring in fast data and streaming applications

Emre Velipasaoglu (Lightbend)
1:50pm–2:30pm Thursday, March 8, 2018
Average rating: 4.00 (1 rating)

Who is this presentation for?

  • Data scientists, machine learning engineers and developers, and engineering leaders and architects

Prerequisite knowledge

  • Basic familiarity with machine learning (classification, regression, clustering, etc.) and statistical testing concepts

What you'll learn

  • Explore the available machine-learned model quality monitoring methods

Description

Most machine learning algorithms are designed to work with stationary data. These algorithms are usually the first ones tried by teams building machine learning applications, because they are readily available in popular open source libraries, such as Python's scikit-learn and distributed machine learning libraries like Spark MLlib. Yet real-life streaming data is rarely stationary: its statistical characteristics change over time, and with them the quality and relevance of any models that depend on it. Machine-learned models built on data observed within a fixed time period usually suffer a loss of prediction quality due to what is known as concept drift.
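
The effect is easy to demonstrate. The following minimal sketch (with synthetic, hypothetical data) trains a static scikit-learn classifier on one time window and then scores it on later batches whose class distribution has drifted; accuracy decays even though the model itself is unchanged:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)

    def make_batch(n, shift):
        """Two Gaussian classes; `shift` moves class 1 toward class 0,
        simulating concept drift in the input distribution."""
        X0 = rng.normal(loc=0.0, size=(n, 2))
        X1 = rng.normal(loc=2.0 - shift, size=(n, 2))
        return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

    # Train once on data observed within a fixed time period (no drift yet).
    X_train, y_train = make_batch(500, shift=0.0)
    model = LogisticRegression().fit(X_train, y_train)

    # Score on successive "time periods" with increasing drift:
    # prediction quality degrades although nothing about the model changed.
    for t, shift in enumerate([0.0, 1.5, 3.0, 4.5]):
        X_t, y_t = make_batch(500, shift=shift)
        print(f"period {t}: accuracy = {model.score(X_t, y_t):.3f}")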

There are several methods to deal with concept drift. The most common is to periodically retrain the model on new data, perhaps down-weighting or entirely discarding the old data. The length of the retraining period is usually determined by the cost of retraining; changes in the input data and in the quality of predictions are not monitored, and the cost of inaccurate predictions does not enter these calculations. At the other end of the complexity spectrum are adaptive learning methods, although these algorithms still require careful parameter tuning to perform well.
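
As a concrete illustration of periodic retraining, here is a minimal sketch, assuming batch-organized training data and scikit-learn; the window size and decay factor are illustrative assumptions, not recommendations:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    WINDOW = 5   # number of most recent batches kept (assumed)
    DECAY = 0.7  # per-batch-of-age multiplier for down-weighting old data

    def retrain(batches):
        """batches: list of (X, y) pairs, oldest first.
        Refits on a sliding window, exponentially down-weighting
        older batches via scikit-learn's sample_weight."""
        Xs, ys, ws = [], [], []
        for age, (X, y) in enumerate(reversed(batches[-WINDOW:])):
            Xs.append(X)
            ys.append(y)
            ws.append(np.full(len(y), DECAY ** age))  # older => smaller weight
        return LogisticRegression().fit(
            np.vstack(Xs), np.concatenate(ys),
            sample_weight=np.concatenate(ws))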

An attractive alternative in between is to monitor model quality directly, testing the inputs and predictions for changes over time and using detected change points to drive retraining decisions. There has been significant development in this area over the last two decades. While most of these methods are designed for classification models, some newer methods handle regression problems as well.
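
One classic member of this family is the Page-Hinkley test, which raises an alarm when the running mean of a monitored statistic (here, the per-example 0/1 prediction error) sustains an increase beyond a threshold. A minimal sketch, with illustrative (untuned) parameter values:

    class PageHinkley:
        """Detects a sustained increase in a streamed statistic."""
        def __init__(self, delta=0.005, threshold=5.0):
            self.delta = delta          # tolerated drift magnitude
            self.threshold = threshold  # alarm level (lambda)
            self.n = 0
            self.mean = 0.0
            self.cum = 0.0              # cumulative deviation m_t
            self.cum_min = 0.0          # running minimum of m_t

        def update(self, error):
            """Feed one per-example error (e.g., 0/1 loss).
            Returns True when a change point is detected."""
            self.n += 1
            self.mean += (error - self.mean) / self.n
            self.cum += error - self.mean - self.delta
            self.cum_min = min(self.cum_min, self.cum)
            return (self.cum - self.cum_min) > self.threshold

    detector = PageHinkley()
    # for x, y in stream:                        # hypothetical stream
    #     err = float(model.predict([x])[0] != y)
    #     if detector.update(err):
    #         ...  # change point detected: trigger retraining here

Detectors like this turn retraining from a fixed schedule into a data-driven decision, at the cost of choosing the detector's own parameters.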

Emre Velipasaoglu evaluates monitoring methods for applicability in modern fast data and streaming applications. Along the way, Emre discusses batch and active learning for retraining and illustrates how simple periodic retraining can be suboptimal. He also briefly covers adaptive learning algorithms.

Emre Velipasaoglu

Lightbend

Emre Velipasaoglu is principal data scientist at Lightbend. A machine learning expert, Emre previously served as principal scientist and senior manager at Yahoo! Labs. He has authored 23 peer-reviewed publications and nine patents in search, machine learning, and data mining. Emre holds a PhD in electrical and computer engineering from Purdue University and completed postdoctoral training at Baylor College of Medicine.