Presented by O’Reilly + Cloudera: Make Data Work
March 25-28, 2019 | San Francisco, CA

From flat files to deconstructed databases: The evolution and future of the big data ecosystem

Julien Le Dem (WeWork)
4:20pm-5:00pm Wednesday, March 27, 2019
Average rating: 4.83 (6 ratings)

Who is this presentation for?

  • Data engineers, architects, and data scientists

Level

Intermediate

Prerequisite knowledge

  • A basic understanding of Hadoop and SQL

What you'll learn

  • Learn how the Hadoop ecosystem has evolved over the past 10 years
  • Explore the architecture of query engines and see how it relates to modern open source components of the Hadoop ecosystem

Description

Over the past 10 years, big data infrastructure has evolved from flat files in a distributed filesystem, to an efficient ecosystem, to a fully deconstructed, open source database built from reusable components. With Hadoop, we started with a system that was good at looking for a needle in a haystack using snowplows: we had a lot of horsepower and scalability but lacked the subtlety and efficiency of relational databases. But since Hadoop provided far more flexibility than the more constrained and rigid RDBMSs, we didn’t mind and plowed through.

However, machine learning, recommendations, matching, abuse detection, and data-driven products in general require a more flexible infrastructure. Over time, we started applying to this new environment everything the database world had known for decades. We’d been told loudly enough that Hadoop was a huge step backward, and to some degree it was true. The key difference was the flexibility of the Hadoop stack: a relational database bundles many highly integrated components, and decoupling them took some time.

Today, we see the emergence of key components, such as optimizers, columnar storage, in-memory representation, table abstraction, and batch and streaming execution, as standards that provide the glue between the options available to process, analyze, and learn from our data. We’ve been deconstructing the tightly integrated relational database into flexible reusable open source components. Storage, compute, multitenancy, and batch or streaming execution are all decoupled and can be modified independently to fit every use case.
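
As a concrete illustration of that decoupling, here is a minimal sketch in Python, assuming the pyarrow and pandas packages (neither of which is prescribed by the talk): data is built in memory as an Arrow table, persisted once as Parquet, and then read back by a different engine entirely through the shared open formats.

    # Sketch of storage decoupled from compute, assuming pyarrow and pandas.
    # One component writes columnar data at rest (Parquet); a different
    # engine reads it back through the shared open format.
    import pyarrow as pa
    import pyarrow.parquet as pq
    import pandas as pd

    # Build an in-memory Arrow table (the columnar in-memory representation).
    events = pa.table({
        "user_id": [1, 2, 3],
        "event": ["signup", "login", "purchase"],
        "amount": [0.0, 0.0, 19.99],
    })

    # Persist it as Parquet: columnar data at rest, engine-agnostic.
    pq.write_table(events, "events.parquet")

    # A completely different "engine" (here pandas) reads the same file
    # without any knowledge of the process that produced it.
    df = pd.read_parquet("events.parquet")
    print(df.groupby("event")["amount"].sum())

Any other Arrow-aware engine could read the same file; the point is that the storage format is not owned by the engine that wrote it.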

Julien Le Dem discusses the key open source components of the big data ecosystem—including Apache Calcite, Parquet, Arrow, Avro, and Kafka, as well as batch and streaming systems—and explains how they relate to each other and how they make the ecosystem more of a database and less of a filesystem. (Parquet is the columnar data layout that optimizes data at rest for querying, Arrow is the in-memory representation for maximum-throughput execution and overhead-free data exchange, and Calcite is the optimizer that makes the most of our infrastructure’s capabilities.) Julien also explores the emerging components that are still missing, or haven’t yet become standard, and that are needed to fully materialize the transformation into an extremely flexible database that lets you innovate with your data.
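
On the streaming side, records typically move through Kafka serialized in a row-oriented format such as Avro. The snippet below is a small, hypothetical sketch using the fastavro package (an assumption, not something named in the talk) to show that serialization step in isolation; an actual Kafka producer and consumer are deliberately left out to keep it self-contained.

    # Hypothetical sketch: serializing records with an Avro schema, as one
    # would before publishing them to a Kafka topic. Assumes the fastavro
    # package; Kafka itself is not involved here.
    import io
    import fastavro

    schema = fastavro.parse_schema({
        "type": "record",
        "name": "Event",
        "fields": [
            {"name": "user_id", "type": "long"},
            {"name": "event", "type": "string"},
            {"name": "amount", "type": "double"},
        ],
    })

    records = [
        {"user_id": 1, "event": "signup", "amount": 0.0},
        {"user_id": 3, "event": "purchase", "amount": 19.99},
    ]

    # Write the records to an in-memory buffer in the Avro container format...
    buf = io.BytesIO()
    fastavro.writer(buf, schema, records)

    # ...and read them back, as a downstream consumer would.
    buf.seek(0)
    for record in fastavro.reader(buf):
        print(record)

Because the schema travels with the data, producers and consumers can evolve independently, which is part of what makes the decoupled stack workable for streaming.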


Julien Le Dem

WeWork

Julien Le Dem is a principal engineer at WeWork. He’s also the coauthor of Apache Parquet and the PMC chair of the project, and he’s a committer and PMC member on Apache Pig, Apache Arrow, and a few other projects. Previously, he was an architect at Dremio; tech lead for Twitter’s data processing tools, where he also obtained a two-character Twitter handle (@J_); and a principal engineer and tech lead working on content platforms at Yahoo, where he received his Hadoop initiation. His French accent makes his talks particularly attractive.