31 May–1 June 2016: Training
1 June–3 June 2016: Conference
London, UK

Hadoop's storage gap: Resolving transactional access/analytic performance trade-offs with Apache Kudu (incubating)

Todd Lipcon (Cloudera)
11:15–11:55 Thursday, 2/06/2016
Hadoop internals & development
Location: Capital Suite 15/16
Level: Intermediate
Tags: real-time
Average rating: 4.42 (12 ratings)

Prerequisite knowledge

Attendees should be familiar with Hadoop storage alternatives.

Description

Over the past several years, the Hadoop ecosystem has made great strides in its real-time access capabilities, narrowing the gap with traditional database technologies. With systems such as Impala and Spark, analysts can now run complex queries or jobs over large datasets in a matter of seconds. With systems such as Apache HBase and Apache Phoenix, applications can achieve millisecond-scale random access to arbitrarily sized datasets.

Despite these advances, some important challenges remain that prevent many applications from transitioning to Hadoop-based architectures. Users are often caught between a rock and a hard place: columnar formats such as Apache Parquet offer extremely fast scan rates for analytics but little to no ability for real-time modification or row-by-row indexed access, while online systems such as HBase offer very fast random access but scan rates that are too slow for large-scale data warehousing workloads.

Todd Lipcon investigates the trade-offs between real-time transactional access and fast analytic performance from the perspective of storage engine internals. He then offers an overview of Apache Kudu, a new addition to the open source Hadoop ecosystem that fills the gap described above, complementing HDFS and HBase to provide a new option for achieving fast scans and fast random access from a single API.
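
As a taste of what that single-API workflow can look like, here is a minimal sketch using the Java client API from Apache Kudu releases: it inserts one row and then scans the same table through the same client handle, combining the random-access write path and the analytic read path that the description contrasts. The master address ("kudu-master:7051"), the table name ("metrics"), and its column names are hypothetical placeholders, and error handling is kept to a minimum.

    import org.apache.kudu.client.*;

    public class KuduSingleApiSketch {
      public static void main(String[] args) throws KuduException {
        // Connect to a Kudu master (hypothetical address).
        KuduClient client =
            new KuduClient.KuduClientBuilder("kudu-master:7051").build();
        try {
          // Hypothetical pre-existing table with columns host, ts, value.
          KuduTable table = client.openTable("metrics");

          // Random-access write path: insert a single row.
          KuduSession session = client.newSession();
          Insert insert = table.newInsert();
          PartialRow row = insert.getRow();
          row.addString("host", "host1.example.com");
          row.addLong("ts", System.currentTimeMillis());
          row.addDouble("value", 0.42);
          session.apply(insert);
          session.close();  // flushes any buffered operations

          // Analytic read path: scan the same table via the same client.
          KuduScanner scanner = client.newScannerBuilder(table)
              .setProjectedColumnNames(java.util.Arrays.asList("host", "value"))
              .build();
          while (scanner.hasMoreRows()) {
            RowResultIterator results = scanner.nextRows();
            while (results.hasNext()) {
              RowResult result = results.next();
              System.out.println(
                  result.getString("host") + " = " + result.getDouble("value"));
            }
          }
        } finally {
          client.shutdown();
        }
      }
    }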

Todd Lipcon

Cloudera

Todd Lipcon is an engineer at Cloudera, where he primarily contributes to open source distributed systems in the Apache Hadoop ecosystem. Previously, he focused on Apache HBase, HDFS, and MapReduce, where he designed and implemented redundant metadata storage for the NameNode (QuorumJournalManager), ZooKeeper-based automatic failover, and numerous performance, durability, and stability improvements. In 2012, Todd founded the Apache Kudu project and has spent the last three years leading this team. Todd is a committer and PMC member on Apache HBase, Hadoop, Thrift, and Kudu, as well as a member of the Apache Software Foundation. Prior to Cloudera, Todd worked on web infrastructure at several startups and researched novel machine learning methods for collaborative filtering. Todd holds a bachelor’s degree with honors from Brown University.