Presented By O'Reilly and Cloudera
Make Data Work
Sept 29–Oct 1, 2015 • New York, NY

Copycat: Fault tolerant streaming data ingestion powered by Apache Kafka

Neha Narkhede (Confluent)
1:15pm–1:55pm Thursday, 10/01/2015
Data Innovations
Location: 1 E18 / 1 E19 Level: Intermediate
Average rating: 4.50 (14 ratings)

Stream processing is skyrocketing in popularity, but most organizations still face a serious challenge: getting their data into (and out of) their stream processing framework of choice. Even a small organization may have dozens of data sources at coarse granularity (one or more databases, app events, log files, metrics, and more), and many thousands if you count each table, application log type, or metric individually. Loading all this data into your stream processing framework often means tracking down dozens of third-party connectors that vary greatly in quality and robustness.

On the other hand, Apache Kafka has quickly become a de facto standard for data storage alongside stream processing frameworks. All the major frameworks, including Samza, Spark Streaming, and Storm, include robust and actively maintained Kafka connectors. Kafka has achieved this status because it is a natural fit for streaming data: it offers high throughput and low latency, scales to very large workloads, and uses retention policies to explicitly bound the stored data, something that must be done manually with most data stores. And it is not only good for input and output; it is also ideal for intermediate storage of derived data streams that may feed back into additional downstream processing jobs.
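To make that last point concrete, here is a minimal sketch (not from the talk) of publishing a derived stream back into Kafka with the standard Java producer API so that downstream jobs can consume it; the broker address, topic name, and record contents are illustrative assumptions.

```java
// Illustrative sketch: write a derived stream back to Kafka for downstream jobs.
// Broker address, topic name, and payload are assumptions, not from the talk.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class DerivedStreamWriter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Each record is appended to the topic's log; old data is expired by the
            // topic's retention policy (e.g. retention.ms) rather than deleted by hand.
            producer.send(new ProducerRecord<>("derived-page-views", "user-42", "{\"views\": 17}"));
        }
    }
}
```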

What if we leveraged Kafka to drastically simplify this ecosystem and improve the overall quality of connectors? This presentation introduces Copycat, a new framework for loading structured data into and out of Kafka, and therefore into and out of all the major stream processing frameworks. First we will describe the kinds of impedance mismatch that arise between data sources and sinks, which make connectors difficult to write well and lead to high variation in quality, and explain how introducing Kafka solves these problems. Next, we will dive into the details of Copycat and how it compares to systems with similar goals, discussing key design decisions that trade off ease of use for connector writers, operational complexity, and reuse of existing connectors. Finally, we will discuss how standardizing on Copycat and Kafka for ingesting data into your stream processing framework can ultimately simplify your entire data pipeline and make ETL into your data warehouse as simple as adding another Copycat job.
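As a rough illustration of what "adding another Copycat job" might look like, the sketch below shows a minimal source task. The interface names (SourceTask, SourceRecord) follow the API that Copycat was later released under as Kafka Connect; the file-tailing source, destination topic, and helper method are hypothetical.

```java
// Hedged sketch of a Copycat-style source task. Interface names follow the
// Kafka Connect API that Copycat became; the file source itself is invented.
import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public class FileLineSourceTask extends SourceTask {
    private String filename;
    private long position = 0L;  // byte offset in the file, tracked for resumption

    @Override
    public void start(Map<String, String> props) {
        // Connector configuration arrives as simple string properties.
        filename = props.get("file");
    }

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        String line = readNextLine(filename, position);  // hypothetical helper
        if (line == null) {
            return Collections.emptyList();               // nothing new yet
        }
        position += line.length() + 1;
        // The source partition/offset maps let the framework resume after a
        // restart, which is where the fault tolerance in the title comes from.
        return Collections.singletonList(new SourceRecord(
                Collections.singletonMap("file", filename),
                Collections.singletonMap("position", position),
                "ingested-lines",            // destination Kafka topic (assumed)
                Schema.STRING_SCHEMA,
                line));
    }

    @Override
    public void stop() { }

    @Override
    public String version() { return "0.1-sketch"; }

    private String readNextLine(String file, long pos) {
        // Placeholder for real file-tailing logic.
        return null;
    }
}
```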


Neha Narkhede

Confluent

Neha Narkhede is the cofounder and CTO at Confluent, a company backing the popular Apache Kafka messaging system. Previously, Neha led streams infrastructure at LinkedIn, where she was responsible for LinkedIn’s petabyte-scale streaming infrastructure built on top of Apache Kafka and Apache Samza. Neha specializes in building and scaling large distributed systems and is one of the initial authors of Apache Kafka. A distributed systems engineer by training, Neha works with data scientists, analysts, and business professionals to move the needle on results.


Comments

Jatin Hansoty
10/01/2015 10:02am EDT

How does/will Copycat improve on the Flume-Kafka integration?