Presented By O'Reilly and Cloudera
Make Data Work
December 1–3, 2015 • Singapore

Hadoop's storage gap: Resolving transactional access/analytic performance trade-offs with Kudu

Todd Lipcon (Cloudera)
11:00am–11:40am Wednesday, 12/02/2015
Hadoop Platform
Location: 334-335 Level: Intermediate
Tags: featured
Average rating: 4.33 (6 ratings)

Prerequisite Knowledge

Basic knowledge of Hadoop ecosystem

Description

Over the past several years, the Hadoop ecosystem has made great strides in its real-time access capabilities, narrowing the gap with traditional database technologies. With systems such as Impala and Spark, analysts can now run complex queries or jobs over large datasets within a matter of seconds. With systems such as Apache HBase and Apache Phoenix, applications can achieve millisecond-scale random access to arbitrarily sized datasets.

Despite these advances, some important gaps remain that prevent many applications from transitioning to Hadoop-based architectures. Users are often caught between a rock and a hard place: columnar formats such as Apache Parquet offer extremely fast scan rates for analytics, but little to no support for real-time modification or row-by-row indexed access. Online systems such as HBase offer very fast random access, but scan rates that are too slow for large-scale data warehousing workloads.

This session will investigate the tradeoffs between real-time transactional access and fast analytic performance from the perspective of storage engine internals. We will discuss recent advances from academic literature and commercial systems, evaluate benchmark results from current generation Hadoop technologies, and propose potential ways ahead for the Hadoop ecosystem to conquer its newest set of challenges.

Todd Lipcon

Cloudera

Todd Lipcon is an engineer at Cloudera, where he primarily contributes to open source distributed systems in the Apache Hadoop ecosystem. Previously, he focused on Apache HBase, HDFS, and MapReduce, where he designed and implemented redundant metadata storage for the NameNode (QuorumJournalManager), ZooKeeper-based automatic failover, and numerous performance, durability, and stability improvements. In 2012, Todd founded the Apache Kudu project and has spent the last three years leading this team. Todd is a committer and PMC member on Apache HBase, Hadoop, Thrift, and Kudu, as well as a member of the Apache Software Foundation. Prior to Cloudera, Todd worked on web infrastructure at several startups and researched novel machine-learning methods for collaborative filtering. Todd holds a bachelor’s degree with honors from Brown University.