Presented By O'Reilly and Cloudera
Make Data Work
Sept 29–Oct 1, 2015 • New York, NY

Data liberation and data integration with Kafka

Martin Kleppmann (University of Cambridge)
4:35pm–5:15pm Wednesday, 09/30/2015
Data Innovations
Location: 1 E18 / 1 E19 Level: Intermediate
Average rating: 4.14 (14 ratings)
Slides: PDF

Apache Kafka is a popular open source message broker for high-throughput real-time event data, such as user activity logs or IoT sensor data. It originated at LinkedIn, where it reliably handles around a trillion messages per day.

Less widely known is that Kafka is also well suited to extracting data from existing databases and making it available for analysis or for building data products. Unlike slow, batch-oriented ETL, Kafka can make database data available to consumers in real time, while also allowing efficient archiving to HDFS for use in Spark, Hadoop, or data warehouses.
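As a minimal sketch of the archiving idea described above (all names and the path layout are hypothetical, not from the talk): a consumer reading change events from a Kafka topic might bucket each message into a date-partitioned HDFS-style path, so that downstream Spark or Hadoop jobs can process one day at a time.

```python
import datetime

def archive_path(topic, partition, offset, timestamp_ms):
    """Hypothetical HDFS-style archive path for one Kafka message,
    bucketed by topic and by the UTC date of the event timestamp."""
    day = datetime.datetime.fromtimestamp(
        timestamp_ms / 1000, tz=datetime.timezone.utc
    ).strftime("%Y/%m/%d")
    return f"/data/{topic}/{day}/{partition:05d}-{offset}.avro"

# A message from a hypothetical changelog topic, timestamped 2015-09-30 UTC:
print(archive_path("users.changelog", 3, 42, 1443600000000))
# /data/users.changelog/2015/09/30/00003-42.avro
```

Partitioning the archive by event date keeps each batch job's input bounded, while the partition and offset in the filename make re-archiving idempotent.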

When data science and product teams can process operational data in real time and combine it with user activity logs or sensor data, the result is a potent mixture. Having all the data centrally available in a stream data platform is an exciting enabler for data-driven innovation.

In this talk, we will discuss what a Kafka-based stream data platform looks like, and how it is useful:

  • Examples of the kinds of problems you can solve with Kafka
  • Extracting real-time data feeds from databases, and sending them to Kafka
  • Using Avro for schema management and future-proofing your data
  • Designing your data pipelines to be resilient, but also flexible and amenable to change
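The Avro point above can be illustrated with a minimal, hypothetical schema (the record and field names are invented for illustration). Avro's compatibility rule is that a new field added with a default value can be safely ignored by old consumers and filled in for old data, which is what makes pipelines amenable to change.

```python
import json

# Hypothetical v1 Avro schema for a user-change event.
user_schema_v1 = {
    "type": "record",
    "name": "UserChange",
    "fields": [
        {"name": "user_id", "type": "long"},
        {"name": "email", "type": "string"},
    ],
}

# Schema evolution: v2 adds a field *with a default*, so consumers
# still reading with v1 stay compatible, and v1 data read with v2
# gets the default filled in.
user_schema_v2 = {
    "type": "record",
    "name": "UserChange",
    "fields": user_schema_v1["fields"] + [
        {"name": "country", "type": ["null", "string"], "default": None},
    ],
}

print(json.dumps(user_schema_v2, indent=2))
```

Registering each schema version centrally (as the talk's stream data platform suggests) lets producers and consumers upgrade independently.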

Martin Kleppmann

University of Cambridge

Martin Kleppmann is a researcher in distributed systems at the University of Cambridge. Previously, he cofounded and sold two startups and worked on large-scale data infrastructure at internet companies including LinkedIn. Martin is the author of Designing Data-Intensive Applications from O’Reilly.