Presented By O'Reilly and Cloudera
Make Data Work
31 May–1 June 2016: Training
1 June–3 June 2016: Conference
London, UK
SOLD OUT

Spark foundations: Prototyping Spark use cases on Wikipedia datasets

Stephane Rion (Big Data Partnership)
9:00–17:00 Tuesday, 31/05/2016 - Wednesday, 01/06/2016
Training
Location: Capital Suite 17

All training courses take place 9:00 - 17:00, Tuesday and Wednesday. In order to maintain a high level of hands-on learning and instructor interaction, each training is limited in size.

Participants should plan to attend both days of this 2-day training. Training passes do not include access to tutorials on Wednesday.

Average rating: 3.50 (2 ratings)

Prerequisite knowledge

Attendees should have:

  • A basic understanding of software development
  • Some experience coding in Python, Java, SQL, Scala, or R
  • An up-to-date version of Chrome or Firefox (Internet Explorer is not supported)
  • Scala programming basics (check out Scala Basics and Atomic Scala)

Description

The real power and value proposition of Apache Spark is in building a unified use case that combines ETL, batch analytics, real-time stream analysis, machine learning, graph processing, and visualizations. Stephane Rion leads hands-on exercises that explore various Wikipedia datasets to illustrate the variety of programming paradigms Spark makes possible. By the end of the training, attendees will be able to create proofs of concept and prototype applications using Spark.

The course will consist of about 50% lecture and 50% hands-on labs. All attendees will have access to Databricks for one month after class to continue working on labs and assignments.

Note that most of the hands-on labs in class will be taught in Scala. (PySpark architecture and code examples will be covered briefly.)

Who should attend?

People with less than two months of hands-on experience with Spark

Datasets explored in class: Pageviews, Clickstream, Pagecounts, English Wikipedia, English Wikipedia with edits, and the live edits stream.

Outline

Day 1

30 mins: Introduction to Wikipedia and Spark
Demo: Logging into Databricks and a tour of the user interface

  • Overview of the six Wikipedia data sources
  • Overview of Apache Spark APIs, libraries, and cluster architecture

2 hours: DataFrames and Spark SQL
Datasets used: Pageviews and Clickstream

  • How to use a SQLContext to create a DataFrame from different data sources (S3, JSON, RDBMS, HDFS, Cassandra, etc.)
  • Run some common operations on DataFrames to explore the data
  • Cache a DataFrame into memory
  • Correctly size the number of partitions in a DataFrame, including the size of each partition
  • How to use the Spark CSV library from Spark Packages to read structured files
  • Mix SQL and DataFrame queries
  • Write a user-defined function (UDF)
  • Join two DataFrames
  • Overview of how Spark SQL’s Catalyst optimizer converts logical plans to optimized physical plans
  • Create visualizations using Matplotlib, Databricks, and Google Visualizations
  • Use the Spark UI’s new SQL tab to troubleshoot performance issues (like input read size, identifying stage boundaries, and Cartesian products)
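
For illustration, here is a minimal Scala sketch of the DataFrame and Spark SQL operations listed above, in the Spark 1.x style used at the time of this course; the file paths, column names, and the presence of the spark-csv package on the classpath are assumptions, not the actual lab setup:

    import org.apache.spark.sql.SQLContext
    import org.apache.spark.sql.functions.udf

    val sqlContext = new SQLContext(sc)              // sc: an existing SparkContext

    // Read a structured file via the spark-csv package (assumed to be on the classpath)
    val pageviews = sqlContext.read
      .format("com.databricks.spark.csv")
      .option("header", "true")
      .option("inferSchema", "true")
      .load("/data/pageviews.csv")                   // hypothetical path

    val clickstream = sqlContext.read.json("/data/clickstream.json")   // hypothetical path

    pageviews.cache()                                // keep the DataFrame in memory
    pageviews.printSchema()
    pageviews.select("project", "requests").show(5)

    // Mix SQL and DataFrame queries
    pageviews.registerTempTable("pageviews")
    val totals = sqlContext.sql(
      "SELECT project, SUM(requests) AS total FROM pageviews GROUP BY project ORDER BY total DESC")

    // A simple user-defined function
    val toUpper = udf((s: String) => s.toUpperCase)
    val withUpper = pageviews.withColumn("project_uc", toUpper(pageviews("project")))

    // Join two DataFrames (assumes both have a "project" column)
    val joined = withUpper.join(clickstream, withUpper("project") === clickstream("project"))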

1.5 hours: Spark core architecture

  • Driver and executor JVMs
  • Local mode
  • Resource managers (standalone, YARN, Mesos)
  • How to optimally configure Spark (# of slots, JVM sizes, garbage collection, etc.)
  • PySpark architecture (different serialization, extra Python processes, UDFs are slower, etc.)
  • Reading Spark logs and stdout on drivers vs. executors
  • Spark UI: exploring the user interface to understand what’s going on behind the scenes of your application (# of tasks, memory of executors, slow tasks, Spark master/worker UIs, etc.)
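
A minimal sketch of how such configuration might look in code; the resource values below are hypothetical, and on a real cluster the master URL would point at the standalone, YARN, or Mesos resource manager rather than local mode:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("wikipedia-labs")
      .setMaster("local[4]")                          // local mode with 4 slots; use a cluster master URL in production
      .set("spark.executor.memory", "4g")             // JVM heap per executor
      .set("spark.executor.cores", "2")               // task slots per executor
      .set("spark.default.parallelism", "8")

    val sc = new SparkContext(conf)
    println(sc.defaultParallelism)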

1.5 hours: Resilient distributed datasets
Datasets used: Pagecounts and English Wikipedia

  • When to use DataFrames vs. RDDs (type safety, memory pressure, optimizations, I/O)
  • Two ways to create an RDD using a SparkContext: parallelize and read from an external data source
  • Common transformations and actions
  • Narrow vs. wide transformations and performance implications (pipelining, shuffle)
  • How transformations lazily build up a directed acyclic graph (DAG)
  • How a Spark application breaks down to Jobs > Stages > Tasks
  • Repartitioning an RDD (repartition vs. coalesce)
  • Different memory persistence levels for RDDs (memory, disk, serialization, etc.)
  • Different types of RDDs (HadoopRDD, ShuffledRDD, MapPartitionsRDD, PairRDD, etc.)
  • Spark UI: how to interpret the new DAG visualization, how to troubleshoot common performance issues like GroupByKey vs. ReduceByKey by looking at shuffle read/write info
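
As a rough illustration of the RDD topics above, a short Scala sketch assuming a hypothetical pagecounts file with space-separated fields (project, page title, requests, bytes):

    import org.apache.spark.storage.StorageLevel

    // Two ways to create an RDD: from a local collection, or from an external source
    val words = sc.parallelize(Seq("spark", "wikipedia", "spark"))
    val pagecounts = sc.textFile("/data/pagecounts")                // hypothetical path

    // Narrow transformations (map, filter): no shuffle, so they pipeline within a stage
    val pairs = pagecounts
      .map(_.split(" "))
      .filter(_.length == 4)
      .map(fields => (fields(0), fields(2).toLong))

    // Wide transformation: reduceByKey combines locally before shuffling,
    // whereas groupByKey would ship every value across the network
    val requestsPerProject = pairs.reduceByKey(_ + _)

    requestsPerProject.persist(StorageLevel.MEMORY_AND_DISK)        // choose a persistence level explicitly
    println(requestsPerProject.toDebugString)                       // inspect the RDD lineage (DAG)
    requestsPerProject.take(5).foreach(println)                     // the action triggers a job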

Day 2

30 mins: Review of Day 1

  • DataFrames and Spark SQL
  • Spark architecture
  • RDDs

1 hour: Shared variables (accumulators and broadcast variables)

  • Common use cases for shared variables
  • How accumulators can be used to implement distributed counters in parallel
  • Using broadcast variables to keep a read-only variable cached on each machine
  • Broadcast variables internals: BitTorrent implementation
  • Differences between broadcast variables and closures/lambdas (across stages vs. per stage)
  • Configuring the autoBroadcastJoinThreshold in Spark SQL to do more efficient joins
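
A minimal Scala sketch of the shared-variable patterns above, using the Spark 1.x accumulator and broadcast APIs; the file path, field layout, and lookup table are made up:

    // Distributed counter (Spark 1.x accumulator API)
    val badRecords = sc.accumulator(0L, "badRecords")

    // Read-only lookup table shipped once per executor rather than once per task
    val languages = sc.broadcast(Map("en" -> "English", "fr" -> "French"))

    val lines = sc.textFile("/data/pagecounts")                     // hypothetical path
    val perLanguage = lines.map { line =>
      val fields = line.split(" ")
      if (fields.length < 4) badRecords += 1                        // updated on executors, read on the driver
      (languages.value.getOrElse(fields(0), "other"), 1L)
    }.reduceByKey(_ + _)

    perLanguage.count()                                             // an action must run before reading the accumulator
    println(s"bad records seen: ${badRecords.value}")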

1 hour: GraphX
Datasets used: Clickstream

  • Use cases for graph processing
  • Graph processing fundamentals: vertex, edge (unidirectional, bidirectional), labels
  • Common graph algorithms: in-degree, out-degree, PageRank, subgraph, shortest path, triplets
  • GraphX internals: How Spark stores large graphs in RDDs (VertexRDD, EdgeRDD, and routing table RDD)
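
For illustration, a small GraphX sketch with hand-built vertices and edges standing in for the Clickstream data; the page titles and click counts are invented:

    import org.apache.spark.graphx.{Edge, Graph}

    val vertices = sc.parallelize(Seq(
      (1L, "Apache_Spark"), (2L, "Scala_(programming_language)"), (3L, "Apache_Hadoop")))

    val edges = sc.parallelize(Seq(
      Edge(2L, 1L, 120L),        // hypothetical click counts between articles
      Edge(3L, 1L, 80L),
      Edge(1L, 3L, 40L)))

    val graph = Graph(vertices, edges)

    graph.inDegrees.collect().foreach(println)                      // in-degree per vertex
    val ranks = graph.pageRank(0.001).vertices                      // run PageRank to a convergence tolerance
    ranks.join(vertices)
      .map { case (_, (rank, title)) => (title, rank) }
      .collect().foreach(println)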

1.5 hours: Spark Streaming
Datasets used: Live edits stream of multiple languages

  • Architecture of Spark Streaming: receivers, batch interval, block interval, direct pull
  • How the microbatch mechanism in Spark Streaming breaks up the stream into tiny batches and processes them
  • How to use a StreamingContext to create input DStreams (discretized streams)
  • Common transformations and actions on DStreams (map, filter, count, union, join, etc.)
  • Creating live, dynamically updated visualizations in Databricks (that update every 2 seconds)
  • Spark UI: how to use the new Spark Streaming UI to understand the performance of batch size vs. processing latency
  • Receiver vs. direct pull approach
  • High availability guidelines (WAL, checkpointing)
  • Window operations: apply transformations over a sliding window of data
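
A minimal Spark Streaming sketch of the concepts above, using a socket text source as a stand-in for the live Wikipedia edits receiver; the host, port, and tab-separated record layout are assumptions:

    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val ssc = new StreamingContext(sc, Seconds(2))                  // 2-second batch interval
    ssc.checkpoint("/tmp/streaming-checkpoint")                     // checkpoint dir for recovery and stateful operations

    // One edit event per line, tab-separated, language code first (assumed layout)
    val edits = ssc.socketTextStream("localhost", 9999)

    // Count edits per language over a 30-second window, sliding every 10 seconds
    val perLanguage = edits
      .map(line => (line.split("\t")(0), 1L))
      .reduceByKeyAndWindow(_ + _, Seconds(30), Seconds(10))

    perLanguage.print()

    ssc.start()
    ssc.awaitTermination()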

1.5 hours: Spark machine learning
Datasets used: English Wikipedia w/ edits

  • Common use cases of machine learning with Spark
  • When to use Spark MLlib (w/ RDDs) vs. Spark ML (w/ DataFrames)
  • ML Pipelines concepts: DataFrames, transformer, estimator, pipeline, parameter
  • Basic statistics with MLlib
  • Topic modeling with LDA (latent Dirichlet allocation w/ GraphX)
  • Word2Vec to convert words into feature vectors
  • Tf-idf (term frequency-inverse document frequency)
  • Streaming machine learning (k-means, linear regression, logistic regression)
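
A minimal spark.ml pipeline sketch illustrating the transformer/estimator/pipeline concepts above with a tokenizer, hashing TF, and IDF on a toy DataFrame; the data and column names are made up:

    import org.apache.spark.ml.{Pipeline, PipelineStage}
    import org.apache.spark.ml.feature.{HashingTF, IDF, Tokenizer}

    val docs = sqlContext.createDataFrame(Seq(
      (0L, "spark makes distributed computing simple"),
      (1L, "wikipedia articles form a large text corpus")
    )).toDF("id", "text")

    val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
    val hashingTF = new HashingTF().setInputCol("words").setOutputCol("tf").setNumFeatures(1000)
    val idf = new IDF().setInputCol("tf").setOutputCol("features")  // an estimator: fit() learns the IDF weights

    val pipeline = new Pipeline().setStages(Array[PipelineStage](tokenizer, hashingTF, idf))
    val model = pipeline.fit(docs)                                  // fitting produces a PipelineModel (a transformer)
    model.transform(docs).select("id", "features").show(false)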

30 mins (optional): Spark R&D

  • Project Tungsten
  • New Datasets API
  • Upcoming developments: DataFrames in Streaming and GraphX, new MLlib algorithms, etc.
  • Berkeley Data Analytics Stack (Succinct, IndexedRDD, BlinkDB, SampleClean)
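
As a taste of the (then experimental) Dataset API mentioned above, a minimal Spark 1.6-style sketch; the case class and values are illustrative only, and a SQLContext named sqlContext is assumed to be in scope:

    import sqlContext.implicits._

    case class PageView(project: String, requests: Long)

    val ds = Seq(PageView("en", 100L), PageView("fr", 40L)).toDS()
    val english = ds.filter(_.project == "en")          // typed, compile-checked lambda instead of a column expression
    english.collect().foreach(println)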

Stephane Rion

Big Data Partnership

Stephane Rion is a senior data scientist at Big Data Partnership, where he helps clients get insight into their data by developing scalable analytical solutions in industries such as finance, gaming, and social services. Stephane has a strong background in machine learning and statistics, with over 6 years’ experience in data science and 10 years’ experience in mathematical modeling. He has solid hands-on skills in machine learning at scale with distributed systems like Apache Spark, which he has used to develop production-grade applications. In addition to Scala with Spark, Stephane is fluent in R and Python, which he uses daily to explore data, run statistical analysis, and build statistical models. He was the first Databricks-certified Spark instructor in EMEA. Stephane enjoys splitting his time between working on data science projects and teaching Spark classes, which he feels is the best way to remain at the forefront of the technology and capture how people are attempting to use Spark within their businesses.
