Presented By O'Reilly and Cloudera
Make Data Work
Feb 17–20, 2015 • San Jose, CA

Going Real-time: Data Collection and Stream Processing with Apache Kafka

Jay Kreps (Confluent)
10:40am–11:20am Thursday, 02/19/2015
Hadoop & Beyond
Location: 230 C
Average rating: 4.85 (13 ratings)
Slides: PPTX

What happens if you take everything that is happening in your company—every click, every impression, every database change, every application log—and make it all available as a real-time stream of well-structured data?

I will discuss the experience at LinkedIn and elsewhere moving from batch-oriented ETL to real-time streams. I’ll talk about how the design and implementation of Apache Kafka was driven by this goal of acting as a real-time platform for event data. I will cover some of the challenges of scaling Kafka to hundreds of billions of events per day and making data available to thousands of users, applications, and data systems in a self-service fashion.

I will describe how real-time streams can become the source of ETL into Hadoop or a relational data warehouse, and how real-time data can supplement the role of batch-oriented analytics in Hadoop or a traditional data warehouse.

I will also describe how applications and stream processing systems such as Storm or Samza can make use of these feeds for sophisticated real-time data processing as events occur.


Jay Kreps


Jay is one of the primary architects at LinkedIn, where he focuses on data infrastructure and data-driven products.

He was among the original authors of a number of open source projects in the scalable data systems space, including Voldemort, Azkaban, Kafka, and Samza.

He has spent equal time working on innovative data products, such as predicting professional relationships (“People You May Know”) and collaborative filtering-based recommendations.
