31 May–1 June 2016: Training
1 June–3 June 2016: Conference
London, UK

Hadoop application architectures: Fraud detection

Jonathan Seidman (Cloudera), Mark Grover (Lyft), Gwen Shapira (Confluent), Ted Malaska (Capital One)
9:00–12:30 Wednesday, 1 June 2016
Hadoop internals & development
Location: Capital Suite 13
Level: Intermediate
Average rating: 3.50 (6 ratings)

Prerequisite knowledge

Attendees should have a basic understanding of the Hadoop ecosystem.

Materials or downloads needed in advance

You'll need a laptop if you'd like to follow along. Code for the demo and associated instructions are available on GitHub; slides (subject to change) are also available online.

Description

Implementing a scalable, low-latency architecture requires understanding a broad range of frameworks, such as Kafka, HBase, HDFS, Flume, Spark, Spark Streaming, and Impala, among many others. The good news is that there’s an abundance of materials—books, websites, conferences, etc.—for gaining a deep understanding of these related projects. The bad news is that there’s still a scarcity of information on how to integrate these components to implement complete solutions.

Jonathan Seidman, Mark Grover, Gwen Shapira, and Ted Malaska walk attendees through an end-to-end case study of building a fraud detection system. Throughout, they cover best practices and considerations for architecting real-time applications on Hadoop and show how to apply those best practices in a fraud detection application, demonstrated live on Cloudera’s QuickStart VM. The code for the demo will be available on GitHub so the audience can follow along. However, the tutorial is not hands-on: the presenters will not attempt to walk you through building the application on your own Hadoop installation. As a result, this tutorial will be most valuable for developers, architects, and project leads who are already knowledgeable about Hadoop or similar distributed data processing systems and are now looking for more insight into how those systems can be leveraged to implement real-world applications.
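
To make the shape of the demo concrete, below is a minimal sketch (in Scala, not taken from the tutorial’s GitHub repository) of the kind of pipeline the case study describes: a Spark Streaming job consuming transaction events from Kafka and flagging suspicious ones. The broker address, topic name, record format, and threshold rule are all hypothetical stand-ins; the real application would also enrich events against profiles stored in HBase rather than print to the console.

    import kafka.serializer.StringDecoder
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    object FraudDetectionSketch {
      def main(args: Array[String]): Unit = {
        val ssc = new StreamingContext(
          new SparkConf().setAppName("fraud-detection-sketch"), Seconds(1))

        // Hypothetical broker address and topic name.
        val kafkaParams = Map("metadata.broker.list" -> "quickstart:9092")
        val topics = Set("transactions")

        // Direct (receiver-less) Kafka stream of (key, value) string pairs.
        val events = KafkaUtils.createDirectStream[
          String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topics)

        // Assumed record format: "accountId,amount". A simple threshold
        // rule stands in for real fraud-scoring logic.
        val flagged = events
          .map { case (_, line) =>
            val fields = line.split(",")
            (fields(0), fields(1).toDouble)
          }
          .filter { case (_, amount) => amount > 10000.0 }

        // The full application would write alerts to HBase for low-latency
        // lookup; printing keeps this sketch self-contained.
        flagged.print()

        ssc.start()
        ssc.awaitTermination()
      }
    }

The direct Kafka connector shown here avoids the write-ahead log that the older receiver-based approach needs for fault tolerance, which is one reason it is a common choice for low-latency pipelines like this.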

Topics include:

  • Modeling data in Kafka, HBase, and Hadoop and selecting optimal formats for storing data (see the HBase sketch after this list)
  • Integrating multiple data-collection, processing, and storage systems
  • Collecting and analyzing event-based data, such as logs and machine-generated data, and storing that data in Hadoop
  • Querying and reporting on data
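
As a taste of the data-modeling topic above, here is a minimal sketch of the HBase side: writing a customer profile with a row key designed to avoid region hotspotting. The table name, column family, column qualifier, and account-id format are hypothetical, not taken from the tutorial’s code.

    import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
    import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
    import org.apache.hadoop.hbase.util.Bytes

    object ProfileTableSketch {
      def main(args: Array[String]): Unit = {
        val connection = ConnectionFactory.createConnection(HBaseConfiguration.create())
        // Hypothetical table and column family.
        val table = connection.getTable(TableName.valueOf("profiles"))

        // Monotonically increasing account ids would all land in the same
        // region; reversing the id spreads writes across regions.
        val accountId = "1000042"
        val put = new Put(Bytes.toBytes(accountId.reverse))
        put.addColumn(Bytes.toBytes("p"), Bytes.toBytes("last_amount"),
          Bytes.toBytes("42.50"))
        table.put(put)

        table.close()
        connection.close()
      }
    }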

Jonathan Seidman

Cloudera

Jonathan Seidman is a software engineer on the cloud team at Cloudera. Previously, he was a lead engineer on the big data team at Orbitz Worldwide, helping to build out the Hadoop clusters supporting the data storage and analysis needs of one of the most heavily trafficked sites on the internet. Jonathan is a cofounder of the Chicago Hadoop User Group and the Chicago Big Data Meetup and a frequent speaker on Hadoop and big data at industry conferences such as Hadoop World, Strata, and OSCON. Jonathan is the coauthor of Hadoop Application Architectures from O’Reilly.


Mark Grover

Lyft

Mark Grover is a product manager at Lyft. Mark is a committer on Apache Bigtop, a committer and PPMC member on Apache Spot (incubating), and a committer and PMC member on Apache Sentry. He has also contributed to a number of open source projects, including Apache Hadoop, Apache Hive, Apache Sqoop, and Apache Flume. He is a coauthor of Hadoop Application Architectures and wrote a section in Programming Hive. Mark is a sought-after speaker on topics related to big data. He occasionally blogs on topics related to technology.


Gwen Shapira

Confluent

Gwen Shapira is a system architect at Confluent, where she helps customers achieve success with their Apache Kafka implementations. She has 15 years of experience working with code and customers to build scalable data architectures, integrating relational and big data technologies. Gwen currently specializes in building real-time, reliable data-processing pipelines using Apache Kafka. Gwen is an Oracle ACE Director, the coauthor of Hadoop Application Architectures, and a frequent presenter at industry conferences. She is also a committer on Apache Kafka and Apache Sqoop. When Gwen isn’t coding or building data pipelines, you can find her pedaling her bike, exploring the roads and trails of California and beyond.


Ted Malaska

Capital One

Ted Malaska is a director of enterprise architecture at Capital One. Previously, he was the director of engineering in the Global Insight Department at Blizzard; principal solutions architect at Cloudera, helping clients find success with the Hadoop ecosystem; and a lead architect at the Financial Industry Regulatory Authority (FINRA). He has contributed code to Apache Flume, Apache Avro, Apache YARN, Apache HDFS, Apache Spark, Apache Sqoop, and many more. Ted is a coauthor of Hadoop Application Architectures, a frequent conference speaker, and a frequent blogger on data architectures.