Presented By O'Reilly and Cloudera
Make Data Work
Dec 4–5, 2017: Training
Dec 5–7, 2017: Tutorials & Conference
Singapore

The stream processor as a database: Building event-driven applications with Apache Flink

Tzu-Li (Gordon) Tai (data Artisans)
12:05pm–12:45pm Thursday, December 7, 2017
Average rating: 5.00 (1 rating)

Who is this presentation for?

  • Data engineers and architects, DevOps engineers, and software engineers

Prerequisite knowledge

  • A basic understanding of distributed stream processing and event-driven applications

What you'll learn

  • Learn how Apache Flink can be used to replace databases in event-driven applications
  • Explore the key features of Apache Flink's stateful stream processing runtime that powers this vision
  • Understand how the key features come together as a working application

Description

Apache Flink is evolving from a framework for streaming data analytics into a platform that offers a foundation for event-driven applications. Over the past year, an increasing number of users have put Flink at the center of their business logic and entrusted it with their most valuable asset: their application data. Entire social networks, for example, have been built on top of Flink, with Flink taking over data management tasks that a database would typically handle in more conventional architectures.

Powering this evolution are Flink’s sophisticated state management features and its streams-and-snapshots approach to stateful stream processing. Tzu-Li (Gordon) Tai explores Flink’s stateful stream processing runtime, where it fits when building event-driven applications, and how it can even replace databases, highlighting key features such as consistent point-in-time savepoints, event-time awareness, flexible rescaling, support for extremely large state, state (schema) evolution, and queryable state. Along the way, Gordon also demonstrates these features in action.
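To make the "stream processor as a database" idea concrete, below is a minimal sketch of the pattern using Flink's DataStream API in Java: a running balance per account is kept in Flink's fault-tolerant keyed state instead of in a database table. The job, class, and state names (AccountBalanceJob, BalanceKeeper, "balance") are illustrative and not taken from the talk.

    import org.apache.flink.api.common.functions.RichFlatMapFunction;
    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.util.Collector;

    public class AccountBalanceJob {

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // Periodic checkpoints make the keyed state below fault tolerant.
            env.enableCheckpointing(10_000);

            env.fromElements(
                    Tuple2.of("alice", 30L), Tuple2.of("bob", 10L), Tuple2.of("alice", 12L))
               .keyBy(0)                         // partition the deposit stream by account id
               .flatMap(new BalanceKeeper())     // the running balance lives in Flink state
               .print();

            env.execute("Account balances kept in Flink state");
        }

        // Keeps one running balance per key in Flink's keyed state,
        // instead of reading and writing rows in an external database.
        public static class BalanceKeeper
                extends RichFlatMapFunction<Tuple2<String, Long>, Tuple2<String, Long>> {

            private transient ValueState<Long> balance;

            @Override
            public void open(Configuration parameters) {
                balance = getRuntimeContext().getState(
                        new ValueStateDescriptor<>("balance", Long.class));
            }

            @Override
            public void flatMap(Tuple2<String, Long> deposit,
                                Collector<Tuple2<String, Long>> out) throws Exception {
                Long current = balance.value();   // null for the first event of a key
                long updated = (current == null ? 0L : current) + deposit.f1;
                balance.update(updated);          // durably captured by checkpoints and savepoints
                out.collect(Tuple2.of(deposit.f0, updated));
            }
        }
    }

The features listed above build on this same state: for instance, a consistent point-in-time savepoint of the job can be triggered with the Flink CLI (bin/flink savepoint <jobID>) and later used to rescale, upgrade, or fork the application without losing the accumulated balances.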

Tzu-Li (Gordon) Tai

data Artisans

Tzu-Li (Gordon) Tai is a software engineer at data Artisans and an Apache Flink committer and PMC member. His main contributions to Flink include its streaming connectors (Kafka, AWS Kinesis, Elasticsearch), its type serialization stack, and its state management capabilities. Gordon is a frequent speaker at conferences such as Flink Forward, Flink meetups in Berlin and Taiwan, and several Taiwan-based conferences covering the Hadoop ecosystem and data engineering in general.