Presented By O'Reilly and Cloudera
Make Data Work
March 13–14, 2017: Training
March 14–16, 2017: Tutorials & Conference
San Jose, CA

Architecting a next-generation data platform

Jonathan Seidman (Cloudera), Ted Malaska (Capital One), Mark Grover (Lyft), Gwen Shapira (Confluent)
1:30pm–5:00pm Tuesday, March 14, 2017
Hadoop platform and applications
Location: LL21 E/F Level: Intermediate
Secondary topics:  Architecture
Average rating: 4.17 (6 ratings)

Who is this presentation for?

  • Software architects, software engineers, data engineers, and project leads

Prerequisite knowledge

  • An understanding of Hadoop concepts and the Hadoop ecosystem, traditional data management systems (e.g., relational databases), and programming languages and concepts

Materials or downloads needed in advance

  • Code for the demo and associated instructions are available here. Note that the tutorial is not a “hands-on” walk-through of building the application on your own Hadoop installation.
  • Slides are available here.

What you'll learn

  • Understand how new and existing tools in the Hadoop ecosystem can be integrated to implement new types of data processing and analysis
  • Learn considerations and best practices for implementing these applications


Apache Hadoop is rapidly moving from its batch processing roots to a more flexible platform supporting both batch and streaming workloads. Rapid advancements in the Hadoop ecosystem are causing a dramatic evolution in both the storage and processing capabilities of the Hadoop platform. These advancements include projects like:

  • Apache Kudu, a modern columnar data store that complements HDFS and Apache HBase by offering efficient analytical capabilities and fast inserts and updates with Hadoop.
  • Apache Kafka, which provides a high-throughput and highly reliable distributed message transport.
  • Apache Impala (incubating), a highly concurrent, massively parallel processing query engine for Hadoop.
  • Apache Spark, which is rapidly replacing frameworks such as MapReduce for processing data on Hadoop due to its efficient design and optimized use of memory. Spark components such as Spark Streaming and Spark SQL provide powerful near real-time processing, enabling new applications using the Hadoop platform.

While these advancements to the Hadoop platform are exciting, they also add a new array of tools that architects and developers need to understand when architecting solutions with Hadoop.

Using Entity 360 as an example, Jonathan Seidman, Ted Malaska, Mark Grover, and Gwen Shapira explain how to architect a modern, real-time big data platform leveraging recent advancements in the open source software world, using components like Kafka, Impala, Kudu, Spark Streaming, and Spark SQL with Hadoop to enable new forms of data processing and analytics. Along the way, they discuss considerations and best practices for utilizing these components to implement solutions, cover common challenges and how to address them, and provide practical advice for building your own modern, real-time big data architectures.

Topics include:

  • Accelerating data processing tasks such as ETL and data analytics by building near real-time data pipelines using tools like Kafka, Spark Streaming, and Kudu
  • Building a reliable, efficient data pipeline using Kafka and tools in the Kafka ecosystem along with Spark Streaming
  • Providing users with fast analytics on data with Impala and Kudu
  • Illustrating how these components complement the batch processing capabilities of Hadoop
  • Leveraging these capabilities along with other tools such as Spark MLlib and Spark SQL to provide sophisticated machine-learning and analytical capabilities for users
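
The Kafka → Spark Streaming → Kudu pipeline covered in these topics follows a consume–transform–upsert pattern: pull a micro-batch of events off a message queue, aggregate it, and insert-or-update the results in a mutable table. As a framework-free illustration of that pattern (plain Python with no Kafka, Spark, or Kudu dependencies; all function and field names here are hypothetical, and the real APIs differ):

```python
# Conceptual sketch of a near real-time pipeline:
# consume events from a queue (Kafka's role), transform them in
# micro-batches (Spark Streaming's role), and upsert aggregates
# into a mutable table (Kudu's role). Names are illustrative only.

from collections import deque

def consume(queue, batch_size):
    """Pull up to batch_size events off the queue (consumer role)."""
    batch = []
    while queue and len(batch) < batch_size:
        batch.append(queue.popleft())
    return batch

def transform(batch):
    """Aggregate events per user (micro-batch processing role)."""
    counts = {}
    for event in batch:
        counts[event["user"]] = counts.get(event["user"], 0) + 1
    return counts

def upsert(table, counts):
    """Insert-or-update aggregates in a mutable store (Kudu's role)."""
    for user, n in counts.items():
        table[user] = table.get(user, 0) + n

# Usage: process a stream of click events in micro-batches of three.
stream = deque({"user": u} for u in ["a", "b", "a", "c", "a"])
table = {}
while stream:
    upsert(table, transform(consume(stream, batch_size=3)))

print(table)  # {'a': 3, 'b': 1, 'c': 1}
```

The upsert step is what distinguishes a Kudu-style sink from append-only HDFS storage: arriving batches update existing rows in place rather than accumulating immutable files.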

Jonathan Seidman


Jonathan Seidman is a software engineer on the cloud team at Cloudera. Previously, he was a lead engineer on the big data team at Orbitz, helping to build out the Hadoop clusters supporting the data storage and analysis needs of one of the most heavily trafficked sites on the internet. Jonathan is a cofounder of the Chicago Hadoop User Group and the Chicago Big Data Meetup and a frequent speaker on Hadoop and big data at industry conferences such as Hadoop World, Strata, and OSCON. Jonathan is the coauthor of Hadoop Application Architectures from O’Reilly.


Ted Malaska

Capital One

Ted Malaska is a director of enterprise architecture at Capital One. Previously, he was the director of engineering in the Global Insight Department at Blizzard; principal solutions architect at Cloudera, helping clients find success with the Hadoop ecosystem; and a lead architect at the Financial Industry Regulatory Authority (FINRA). He has contributed code to Apache Flume, Apache Avro, Apache YARN, Apache HDFS, Apache Spark, Apache Sqoop, and many more. Ted is a coauthor of Hadoop Application Architectures, a frequent speaker at many conferences, and a frequent blogger on data architectures.


Mark Grover


Mark Grover is a product manager at Lyft. Mark’s a committer on Apache Bigtop, a committer and PPMC member on Apache Spot (incubating), and a committer and PMC member on Apache Sentry. He’s also contributed to a number of open source projects, including Apache Hadoop, Apache Hive, Apache Sqoop, and Apache Flume. He’s a coauthor of Hadoop Application Architectures and wrote a section in Programming Hive. Mark is a sought-after speaker on topics related to big data. He occasionally blogs on topics related to technology.


Gwen Shapira


Gwen Shapira is a system architect at Confluent, where she helps customers achieve success with their Apache Kafka implementations. She has 15 years of experience working with code and customers to build scalable data architectures, integrating relational and big data technologies. Gwen currently specializes in building real-time reliable data processing pipelines using Apache Kafka. Gwen is an Oracle Ace Director, the coauthor of Hadoop Application Architectures, and a frequent presenter at industry conferences. She is also a committer on Apache Kafka and Apache Sqoop. When Gwen isn’t coding or building data pipelines, you can find her pedaling her bike, exploring the roads and trails of California and beyond.

Comments on this page are now closed.


Mark Grover
03/14/2017 4:28pm PDT

@Pavan, we recommend using the legacy memory manager for Spark (spark.memory.useLegacyMode=true), with which you manage the storage memory fraction (spark.storage.memoryFraction) and the shuffle memory fraction (spark.shuffle.memoryFraction). Depending on where the OOM is happening (based on the stack trace), those are the two properties you should consider adjusting.
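
For reference, the legacy-mode settings mentioned in the comment above would typically go in spark-defaults.conf (or be passed via --conf on spark-submit). The fraction values below are illustrative placeholders, not recommendations; tune them against the actual stack trace:

```
# spark-defaults.conf (legacy memory manager; values are illustrative)
spark.memory.useLegacyMode    true
spark.storage.memoryFraction  0.5
spark.shuffle.memoryFraction  0.3
```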

Mark Grover
03/14/2017 4:25pm PDT

@Shahid, good talking to you after the tutorial. As we discussed, our use case is met well by open source tools for SQL access (like Impala) and free-form text search (like Solr), so we don’t see a compelling use case for Splunk in the architecture.

Pavan Naramreddy | EAS MANAGER II
03/14/2017 9:45am PDT

Spark SQL has frequent OOM issues. What would be an optimal system configuration for memory vs. shuffle space? Any pointers to a knowledge base?

Shahid Shafi | DIRECTOR, IT
03/14/2017 9:22am PDT

Why not Splunk?

Mark Grover
03/14/2017 3:26am PDT

The slides for the presentation are at