Presented By O’Reilly and Cloudera
Make Data Work
21–22 May 2018: Training
22–24 May 2018: Tutorials & Conference
London, UK

The Cloud is Expensive so Build Your Own Redundant Hadoop Clusters

Stuart Pook (Criteo)
11:15–11:55 Wednesday, 23 May 2018

Who is this presentation for?

DevOps engineers, big data engineers, and data architects

Prerequisite knowledge

Experience with Hadoop or with running services on bare metal

What you'll learn

Cloud versus in-house costs, data and compute redundancy, building Hadoop on bare metal, running big Hadoop clusters, choosing hardware, and working with hardware manufacturers

Description

Criteo has a main production cluster of 2,000 nodes that runs over 300,000 jobs a day and a backup cluster of 1,200 nodes. Our job is to keep these clusters running together while we build a cluster to replace the backup cluster. These clusters are in our own data centres, as running in the cloud would be many times more expensive.

These two clusters were meant to provide a redundant solution to Criteo’s storage and compute needs, including a tested failover mechanism. We will explain our project, what went wrong, and our progress in building yet another cluster to finally create a computing system that will survive the loss of an entire data centre.
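As background for the redundancy discussion, here is a minimal sketch of the kind of cross-cluster copy that keeps a backup cluster's data in step with a main cluster, assuming HDFS DistCp is used for replication. The NameNode URIs and paths are hypothetical, not Criteo's actual configuration or tooling.

    #!/usr/bin/env python3
    # Minimal sketch: copy an HDFS directory from the main cluster to the
    # backup cluster with DistCp. NameNode URIs and paths are hypothetical.
    import subprocess
    import sys

    SOURCE = "hdfs://main-nn:8020/data/events/2018-05-23"    # hypothetical
    TARGET = "hdfs://backup-nn:8020/data/events/2018-05-23"  # hypothetical

    def replicate(source, target):
        # "hadoop distcp -update -p" copies only files that have changed and
        # preserves file attributes; DistCp runs as a MapReduce job.
        result = subprocess.run(["hadoop", "distcp", "-update", "-p",
                                 source, target])
        if result.returncode != 0:
            sys.exit("distcp failed with exit code %d" % result.returncode)

    if __name__ == "__main__":
        replicate(SOURCE, TARGET)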

This presentation will also describe what we have learnt when building and running Hadoop clusters.

Building a cluster requires testing hardware from several manufacturers and choosing the most cost-effective option. We have now done these tests twice and can offer advice on how to get it right the first time.

Our tests were effective except for the RAID controller for our 35,000 disks. We had so many problems with the new controller that we had to replace it, and we are now working with the manufacturers on a solution that will help us better manage our disks.

Hadoop, especially at this scale, does not run itself, so what operational skills and tools are required to keep the clusters healthy, the data safe, and the jobs running 24 hours a day, every day?
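As one illustration of such a tool, here is a minimal sketch of an automated HDFS health check, assuming the standard hdfs dfsadmin and hdfs fsck commands. It is an example of the kind of monitoring script an operations team might run, not Criteo's own tooling.

    #!/usr/bin/env python3
    # Minimal sketch of an automated HDFS health check: an illustration of
    # the kind of monitoring script an operations team might run.
    import subprocess
    import sys

    def hdfs_is_healthy():
        # The NameNode should not be stuck in safe mode.
        safemode = subprocess.run(["hdfs", "dfsadmin", "-safemode", "get"],
                                  capture_output=True, text=True)
        if "Safe mode is ON" in safemode.stdout:
            return False
        # fsck reports whether any blocks are missing or corrupt.
        fsck = subprocess.run(["hdfs", "fsck", "/"],
                              capture_output=True, text=True)
        return "is HEALTHY" in fsck.stdout

    if __name__ == "__main__":
        sys.exit(0 if hdfs_is_healthy() else 1)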


Stuart Pook

Criteo

Stuart loves storage (208 PB at Criteo) and is part of Criteo’s Lake team, which runs some small and two rather large Hadoop clusters. He also loves automation with Chef, because configuring more than 3,000 Hadoop nodes by hand is just too slow. Before discovering Hadoop, he developed user interfaces and databases for biotech companies.

Stuart has presented at ACM CHI 2000, Devoxx 2016, NABD 2016, Hadoop Summit Tokyo 2016, Apache Big Data Europe 2016, Big Data Tech Warsaw 2017, and Apache Big Data North America 2017.
