Presented By O’Reilly and Intel AI
Put AI to work
8-9 Oct 2018: Training
9-11 Oct 2018: Tutorials & Conference
London, UK

Sense-Infer-Act-Learn: A model for trustworthy AI

Rupert Steffner (WUNDER)
13:45–14:25 Wednesday, 10 October 2018
Implementing AI
Location: King's Suite - Balmoral
Secondary topics:  Ethics, Privacy, and Security, Retail and e-commerce
Average rating: 3.00 (1 rating)

Who is this presentation for?

  • AI architects, AI engineers, and deep learning engineers

Prerequisite knowledge

  • Familiarity with AI implementations and real-time streaming analytics and topologies
  • An intermediate understanding of architecture

What you'll learn

  • Explore Sense-Infer-Act-Learn, a logical AI execution model to enable a more trustworthy AI

Description

While advances in machine learning have led to fantastic results, the increase in automated decision making, along with doubts in the quality of algorithmic decisions, has driven demand for transparency and accountability in AI. This new demand for better explanation comes from both inside the AI community and outside.

There’s a tendency to make deep learning models more capable by integrating data preparation, reasoning, and so forth. While this seems like a good idea at first, it has considerable downsides. The first and foremost is the lack of explanation, as black-box models never provide transparency. The second is the absence of stateful processing and of persisted data across the layers. The third is the millisecond-level performance required for edge intelligence. We should therefore consider execution architecture models that are more lightweight, modular, and transparent.

Regarding the technology stack, you could build this logical model with Spark Streaming, Apache Flink, Apache Ignite, or any other stream processing framework, provided your choice meets your needs for stability and performance. Alternatively, you could build a custom application based on complex event processing on top of a streaming analytics topology.

Rupert Steffner offers an overview of Sense-Infer-Act-Learn, a logical AI execution model that enables a more trustworthy AI. Sense-Infer-Act-Learn draws on several models, including the US Air Force's OODA loop, the stimulus-response model from cognitive science, and the agent-reward approach from AI, to engineer a consolidated logical architecture for real-time AI.

Sense-Infer-Act-Learn is a modular layer architecture, with each layer having its tasks to perform.
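As a hedged sketch (illustrative names only, not the talk's actual implementation), the four layers could be wired as pluggable stages of a single per-event loop:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class SenseInferActLearn:
    """Hypothetical wiring of the four layers; all names are illustrative."""
    sense: Callable[[dict], dict]       # data ingest + first preparation
    infer: Callable[[dict], dict]       # signal extraction / enrichment
    act: Callable[[dict], Any]          # decision making
    learn: Callable[[dict, Any], None]  # feedback on every event

    def process(self, raw_event: dict) -> Any:
        observation = self.sense(raw_event)  # Sense: what to observe
        belief = self.infer(observation)     # Infer: states of belief
        decision = self.act(belief)          # Act: next-best decision
        self.learn(belief, decision)         # Learn: loop closes per event
        return decision
```

Keeping each stage behind its own interface is what preserves the modularity and transparency that an end-to-end black-box model gives up.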

Sense:
Sensors set the context for what to observe. For autonomous driving, this is the detection of objects and environments; for customer behavior, it's human cognition, i.e., conceptualizing how humans think. Most of the time, engineering the sensor context is a task that extends beyond technology. With AI, there may be several types of sensors: implicit sensing, if you derive associated human cognition from preferred products, or explicit sensing, if the data corresponds directly to what you observe. In technical terms, this layer handles data ingest and may contain initial data preparation features. With distributed intelligence, the first intelligence happens right here, for example when you recalibrate the sensors to their latest state.
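A minimal sketch of that first intelligence at ingest, assuming a per-sensor offset/scale calibration (the field names and calibration scheme are hypothetical):

```python
def recalibrate(reading: float, offset: float, scale: float) -> float:
    """Apply the sensor's latest calibration before the reading enters the topology."""
    return (reading - offset) * scale

def ingest(raw_events, calibration):
    """Sense layer: data ingest plus first preparation at the edge."""
    for event in raw_events:
        cal = calibration[event["sensor_id"]]  # latest known state per sensor
        yield {**event,
               "value": recalibrate(event["value"], cal["offset"], cal["scale"])}
```

Because recalibration runs per event inside the Sense layer, downstream layers only ever see readings in the sensor's latest state.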

Infer:
Inference is engineering the data signals out of the noise. It's critical to respect this real-time data preparation to get consolidated states of belief in your feature data; otherwise, messy data will flow into your analytic models and degrade their decision quality. Besides data preparation, you can apply some clever mechanisms to overcome the data sparsity problem on cold starts. One method is to apply associative maps that combine signalized data with semantic data; alternatively, you can simply join data from various sources (e.g., clickstream data and product data) to enrich your data. Ideally, you should start building fast data stores here, for example the real-time customer profile, as this layer is the earliest point in the topology where you can operate on cleansed, valid data. It's also a good idea to further conceptualize your data in this layer (heatmaps, outliers, etc.) to extract the maximum meaning from it.
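The clickstream/product join and the real-time customer profile could look like this minimal sketch (all field names are assumptions for illustration):

```python
def enrich(click: dict, products: dict) -> dict:
    """Infer layer: join a clickstream event with product metadata."""
    product = products.get(click["product_id"], {})
    return {**click,
            "category": product.get("category"),
            "price": product.get("price")}

def update_profile(profile: dict, enriched: dict) -> dict:
    """Fold the enriched event into a fast, real-time customer profile."""
    counts = profile.setdefault("category_counts", {})
    category = enriched.get("category")
    if category:  # only cleansed, valid signals reach the profile
        counts[category] = counts.get(category, 0) + 1
    return profile
```

Building the profile at this layer means every later layer operates on already-cleansed, enriched data rather than raw clicks.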

Act:
This layer handles decision making and acting and performs all kinds of analytical tasks. One of the most commonly used analytical features is the calculation of the next-best offer (i.e., the prediction of what a customer is going to buy next). Another is the next-best activity (i.e., analyzing whether the system should continue to show further products or switch to another service mode, such as trying to retain the customer). If the system works similarly to Playbuzz or other quiz applications, there could be analytical features like next-best insight (i.e., the system tries to find out what it wants to know next from the user). Apart from the various business analytics, this layer can be used to negotiate different contexts, such as the real-time context against the historic context, or to analyze anomalies in the data.
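A hedged illustration of negotiating next-best activity against next-best offer; the scores are assumed to come from upstream models, and the thresholds are made up:

```python
def next_best_activity(profile: dict, purchase_score: float, churn_score: float,
                       churn_threshold: float = 0.7,
                       offer_threshold: float = 0.5) -> str:
    """Act layer: pick retention, an offer, or exploration for this event."""
    # Next-best activity: switch service mode when churn risk dominates.
    if churn_score >= churn_threshold:
        return "retain"
    counts = profile.get("category_counts", {})
    top = max(counts, key=counts.get, default=None)
    # Next-best offer: recommend from the strongest profile category
    # when the purchase propensity is high enough.
    if top and purchase_score >= offer_threshold:
        return f"offer:{top}"
    return "explore"
```

The decision logic stays separate from the models that produce the scores, so each can be audited and swapped independently.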

Learn:
The model loops with every user event to make learning fast. The basic notion of learning derives from the nature of the agent-reward model. Learning applications will outperform any other type of application in the long run, and we might see the highest AI potential in this layer. There have already been great advances in reinforcement learning, but this seems to be just the starting point for looking at learning in a far broader sense. There could be continued progress in approaches like MIT's one-shot learning to improve fast learning with every click. Neuroscience provides great hints for slow learning if we mimic the brain's replay mode during sleep. Another kind of learning mix is based on advanced human-in-the-loop approaches such as deep symbolic reinforcement learning, which combines symbol grounding with pattern recognition. Overall, we could see an interesting mix in the near future, with learning organized in a more cascaded style.
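The agent-reward notion can be reduced to a bandit-style incremental update applied on every event; this is a simplification for illustration, not the talk's method:

```python
def learn_from_event(values: dict, action: str, reward: float,
                     alpha: float = 0.1) -> dict:
    """Learn layer: nudge the action's estimated value toward the observed reward.

    alpha is a hypothetical learning rate; higher values learn faster
    but forget more of the historic context.
    """
    old = values.get(action, 0.0)
    values[action] = old + alpha * (reward - old)
    return values
```

Running this update per user event is what makes the learning fast: the value estimates shift a little with every click instead of waiting for a batch retrain.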

Applying the Sense-Infer-Act-Learn model should lead to a more trustworthy AI in several respects. First, when implemented with stateful processing, it potentially gives access to data across all layers, which can be used to manage data preparation and data quality so that the analytical models work with better-trusted data. Second, persistence in managed data stores could be opened up to give users access to their real-time customer profile, let them manage their data, and start a dialogue about how that data is used. The even bigger source of trust might be explanatory inference: explaining the data and the results could even be a matter of compliance (e.g., accountability), but for less critical decisions and outcomes, it will at least be the fuel of a growing trust economy as we continue to automate decision making for nearly every aspect of our lives.


Rupert Steffner

WUNDER

Rupert Steffner is the founder of WUNDER, a cognitive AI startup that is helping consumers find the products they love. Rupert has over 25 years of experience in designing and implementing highly sophisticated technical and business solutions, with a focus on customer-centric marketing. Previously, Rupert was chief platform architect of Otto Group’s new business intelligence platform BRAIN and head of BI at Groupon EMEA and APAC. He also served as business intelligence leader for several European and US companies in the ecommerce, retail, finance, and telco industries. He holds an MBA from WU Vienna and was head of the Marketing Department at the University of Applied Sciences in Salzburg.