Make Data Work
Oct 15–17, 2014 • New York, NY

From Raw Data to Analytics with No ETL

Marcel Kornacker (Cloudera), Lenni Kuff (Facebook)
11:00am–11:40am Thursday, 10/16/2014
Hadoop Platform
Location: Hall A 23/24
Average rating: 2.50 (26 ratings)
Slides: 1-PDF

As data is ingested into Hadoop at an increasing rate from a diverse range of data sources, it is becoming more and more important for users that new data be accessible for analysis as quickly as possible — because “data freshness” can have a direct impact on business results.

In the traditional ETL process, raw data is transformed from the source into a target schema, possibly requiring flattening and condensing, and then loaded into an MPP DBMS. However, this approach has multiple drawbacks that make it unsuitable for real-time, “at-source” analytics — for example, the “ETL lag” reduces data freshness, and the inherent complexity of the process makes it costly to deploy and maintain and reduces the speed at which new analytic applications can be introduced.

In this talk, attendees will learn about Impala’s approach to on-the-fly, automatic data transformation, which, in conjunction with its ability to handle nested structures such as JSON and XML documents, addresses the needs of at-source analytics: direct querying of your input schema, immediate querying of data as it lands in HDFS, and performance on par with specialized engines. This performance level is attained even on the most challenging and diverse input formats, which are handled through an automated background conversion into Parquet, the high-performance, open source columnar format that has been widely adopted across the Hadoop ecosystem.


Marcel Kornacker

Cloudera

Marcel Kornacker is the tech lead at Cloudera for new products. He graduated in 2000 with a PhD in databases from UC Berkeley, followed by engineering positions at several database-related startups. He joined Google in 2003, where he worked on several ads-serving and storage infrastructure projects; his last engagement there was as tech lead for the distributed query engine component of Google’s F1 project.


Lenni Kuff

Facebook

Lenni Kuff is a software engineer at Cloudera working on the Impala project. Lenni graduated from the University of Wisconsin-Madison with degrees in Computer Science and Computer Engineering. Before joining Cloudera he worked at Microsoft on a number of projects including SQL Server storage engine, SQL Azure, and Hadoop on Azure.

Comments on this page are now closed.

Comments

Karthikeyan Nagalingam
10/16/2014 7:17am EDT

Can you share the presentation? It’s quite informative.

Regards,
Karthikeyan.N