Presented By O'Reilly and Cloudera
Make Data Work
22–23 May 2017: Training
23–25 May 2017: Tutorials & Conference
London, UK

Reliable prediction: Handling uncertainty

Robin Senge (inovex)
11:00–11:30 Tuesday, 23 May 2017
Hardcore Data Science
Location: London Suite 2/3
Level: Intermediate

Using machine learning to create predictive models enables many new use cases that traditional software engineering approaches could never have addressed. This is great. In the last few years, we have begun to understand that we can apply these models in almost every field, including driving a car or diagnosing disease. However, unlike traditional (bug-free) computer systems, systems based on predictive models always carry an inherent threat: uncertainty. For instance, how will a trained deep neural network behave in a certain difficult situation when driving a car?

The way these systems are created and trained is to some extent similar to the way a human is trained: through experience. And as we all know, humans can certainly fail while learning a new task. Likewise, these new systems can fail in situations they have not experienced before. Even worse, they will never be free from mistakes no matter how hard we train them, just as we ourselves won't.

Thus, a prerequisite for using and dealing with the uncertainty involved in an automated decision is being able to measure it. As Drucker notes, "If you cannot measure it, you cannot control it." Typically, uncertainty, or the probability of error, is measured by a loss function on a hold-out set of validation examples. A probability calculated this way enables us to decide whether to accept the evaluated model or decline its application in a production environment.
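The hold-out procedure described above can be sketched as follows. This is a minimal illustration, not the speaker's implementation; the dataset, model, and acceptance threshold are all assumptions for the example:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import zero_one_loss
from sklearn.model_selection import train_test_split

# Split off a hold-out validation set that the model never trains on.
X, y = load_iris(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The zero-one loss on the hold-out set estimates the probability of error.
error_estimate = zero_one_loss(y_val, model.predict(X_val))

# A threshold on that estimate can gate the go/no-go decision for production
# (the 10% threshold here is purely illustrative).
acceptable = error_estimate < 0.10
```

Note that this yields a single, aggregate error probability for the model as a whole; it says nothing about the uncertainty of an individual prediction, which is the gap the talk addresses.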

Reliable prediction is the ability of a predictive model to explicitly measure the uncertainty involved in a prediction without feedback. Robin Senge shares two approaches to measuring different types of uncertainty involved in a prediction: conformal prediction by Shafer and Vovk and reliable classification by Senge and Hüllermeier. Besides precisely quantifying the overall uncertainty attached to a prediction, both approaches identify its different sources. Being able to distinguish these sources provides valuable information that can be used during model selection, feature selection, and even active learning scenarios.

Both methods are implemented in Spark and are ready for use.

Robin Senge


Robin Senge is a senior big data scientist on an analytics team at inovex, where he applies machine learning to optimize supply chain processes for one of the biggest groups of retailers in Germany. Robin holds an MSc in computer science and a PhD from the University of Marburg, where his research at the Computational Intelligence Lab focused on machine learning and fuzzy systems.