Presented By O'Reilly and Cloudera
December 5–6, 2016: Training
December 6–8, 2016: Tutorials & Conference
Singapore

Spark foundations: Prototyping Spark use cases on Wikipedia datasets

Andy Huang (Servian Australia)
9:00am–5:00pm, Monday, December 5 & Tuesday, December 6
Location: 331-332
Tags: streaming

Participants should plan to attend both days of this 2-day training. Training passes do not include access to tutorials on Tuesday.

Average rating: *****
(5.00, 1 rating)

What you'll learn

  • Explore the variety of programming paradigms Spark makes possible

Description

The real power and value proposition of Apache Spark is in building a unified use case that combines ETL, batch analytics, real-time stream analysis, machine learning, graph processing, and visualizations. Andy Huang employs hands-on exercises using various Wikipedia datasets to illustrate the variety of programming paradigms Spark makes possible. By the end of the training, you’ll be able to create proofs of concept and prototype applications using Spark.

The course will consist of about 50% lecture and 50% hands-on labs. All participants will have access to Databricks Community Edition after class to continue working on labs and assignments.

Note that most of the hands-on labs will be taught in Scala. (PySpark architecture and code examples will be covered briefly.)

Who should attend?

People with less than two months of hands-on experience with Spark

Datasets explored in class:

  • Pageviews
  • Clickstream
  • Pagecounts
  • English Wikipedia
  • Live edits stream (multiple languages)
  • English Wikipedia with edits

Outline

Day 1

9:00am–9:30am
Introduction to Wikipedia and Spark
Demo: Logging into Databricks and a tour of the user interface

  • Overview of the six Wikipedia data sources
  • Overview of Apache Spark APIs, libraries, and cluster architecture

9:30am–10:30am
DataFrames and Spark SQL
Datasets used: Pageviews and Clickstream

Use a SQLContext to create a DataFrame from different data sources (S3, JSON, RDBMS, HDFS, Cassandra, etc.); a short Scala sketch covering several of these operations follows the list below

  • Run some common operations on DataFrames to explore them
  • Cache a DataFrame in memory
  • Correctly size the number of partitions in a DataFrame, including the size of each partition
  • Use the Spark CSV library from Spark Packages to read structured files
  • Mix SQL and DataFrame queries
  • Write a user-defined function (UDF)
  • Join two DataFrames
  • Overview of how Spark SQL’s Catalyst optimizer converts logical plans to optimized physical plans
  • Create visualizations using matplotlib, Databricks, and Google Visualizations
  • Use the Spark UI’s new SQL tab to troubleshoot performance issues (like input read size, identifying stage boundaries, and Cartesian products)
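
A minimal Scala sketch of several of these operations, using the Spark 1.6-era SQLContext API taught in class. The file path and the project/requests/page column names are illustrative, not the lab's actual data layout:

    import org.apache.spark.sql.SQLContext
    import org.apache.spark.sql.functions.udf

    val sqlContext = new SQLContext(sc)  // sc: an existing SparkContext

    // Read a structured file with the spark-csv package from Spark Packages
    // (launch with e.g. --packages com.databricks:spark-csv_2.10:1.5.0)
    val pageviews = sqlContext.read
      .format("com.databricks.spark.csv")
      .option("header", "true")
      .option("inferSchema", "true")
      .load("/data/pageviews.csv")             // hypothetical path

    pageviews.cache()                          // pin the DataFrame in memory

    // Mix SQL and DataFrame queries
    pageviews.registerTempTable("pageviews")
    val top = sqlContext.sql(
      "SELECT project, SUM(requests) AS total FROM pageviews " +
      "GROUP BY project ORDER BY total DESC LIMIT 10")

    // A user-defined function (UDF)
    val isEnglish = udf((project: String) => project.startsWith("en"))
    val english = pageviews.filter(isEnglish(pageviews("project")))

    // Join two DataFrames (clickstream: a second DataFrame loaded the same way)
    // val joined = pageviews.join(clickstream, pageviews("page") === clickstream("curr"))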

10:30am–11:00am
MORNING BREAK

11:00am–12:30pm
DataFrames and Spark SQL (cont.)

12:30pm–1:30pm
LUNCH

1:30pm–3:00pm
Spark core architecture

  • Driver and executor JVMs
  • Local mode
  • Resource managers (standalone, YARN, Mesos)
  • How to optimally configure Spark (# of slots, JVM sizes, garbage collection, etc.); see the configuration sketch after this list
  • PySpark architecture (different serialization, extra Python processes, UDFs are slower, etc.)
  • Reading Spark logs and stdout on drivers versus executors
  • Spark UI: Exploring the user interface to understand what’s going on behind the scenes of your application (# of tasks, memory of executors, slow tasks, Spark master/worker UIs, etc.)
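
As a hedged illustration of these knobs, a SparkConf sketch with explicit resource settings; the values are placeholders, not tuning recommendations:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("tuning-demo")
      .setMaster("yarn-client")                     // or "local[4]", "spark://host:7077", "mesos://..."
      .set("spark.executor.instances", "4")         // number of executor JVMs (YARN)
      .set("spark.executor.cores", "4")             // task slots per executor
      .set("spark.executor.memory", "8g")           // executor JVM heap size
      .set("spark.executor.extraJavaOptions", "-XX:+UseG1GC")  // garbage-collector choice
    val sc = new SparkContext(conf)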

3:00pm–3:30pm
AFTERNOON BREAK

3:30pm–5:00pm
Resilient distributed datasets
Datasets used: Pagecounts and English Wikipedia

  • When to use DataFrames versus RDDs (type safety, memory pressure, optimizations, I/O)
  • Two ways to create an RDD using a SparkContext: parallelize a collection or read from an external data source (both sketched after this list)
  • Common transformations and actions
  • Narrow versus wide transformations and performance implications (pipelining, shuffle)
  • How transformations lazily build up a directed acyclic graph (DAG)
  • How a Spark application breaks down to Jobs > Stages > Tasks
  • Repartitioning an RDD (repartition versus coalesce)
  • Different memory persistence levels for RDDs (memory, disk, serialization, etc.)
  • Different types of RDDs (HadoopRDD, ShuffledRDD, MapPartitionsRDD, PairRDD, etc.)
  • Spark UI: How to interpret the new DAG visualization and how to troubleshoot common performance issues like groupByKey versus reduceByKey by looking at shuffle read/write info
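
A minimal Scala sketch of the RDD concepts above; the input path is hypothetical:

    import org.apache.spark.storage.StorageLevel

    // Two ways to create an RDD
    val words = sc.parallelize(Seq("spark", "wikipedia", "spark"))  // 1) parallelize a local collection
    val lines = sc.textFile("/data/pagecounts")                     // 2) read an external source (hypothetical path)

    // map is a narrow transformation (pipelined); reduceByKey is wide (shuffles).
    // Prefer reduceByKey over groupByKey: it combines map-side before the shuffle.
    val counts = words.map(w => (w, 1)).reduceByKey(_ + _)

    counts.persist(StorageLevel.MEMORY_AND_DISK)  // choose a persistence level explicitly

    // coalesce reduces partitions without a full shuffle; repartition always shuffles
    val fewer = counts.coalesce(2)

    println(counts.count())  // an action finally triggers the lazily built DAG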

Day 2

9:00am–9:30am
Review of Day 1

  • DataFrames and Spark SQL
  • Spark architecture
  • RDDs

9:30am–10:30am
Shared variables (accumulators and broadcast variables)

  • Common use cases for shared variables (a sketch follows this list)
  • How accumulators can be used to implement distributed counters in parallel
  • Using broadcast variables to keep a read-only variable cached on each machine
  • Broadcast variable internals: the BitTorrent-style implementation
  • Differences between broadcast variables and closures/lambdas (across stages versus per stage)
  • Configuring the autoBroadcastJoinThreshold in Spark SQL to do more efficient joins
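
A minimal sketch of both shared-variable types, using the Spark 1.6-era accumulator API; the input path and lookup map are invented, and sqlContext is assumed from the earlier DataFrames sketch:

    // Distributed counter: written on executors, read back on the driver
    val badRecords = sc.accumulator(0, "bad records")

    // Read-only lookup table, cached once per machine (BitTorrent-style distribution)
    val countryNames = sc.broadcast(Map("SG" -> "Singapore", "AU" -> "Australia"))

    sc.textFile("/data/edits").foreach { line =>       // hypothetical path
      if (line.split("\t").length < 3) badRecords += 1 // accumulate in parallel
    }
    println(badRecords.value)                          // only the driver may read the value

    // Let Spark SQL broadcast larger tables in joins (default threshold is 10 MB)
    sqlContext.setConf("spark.sql.autoBroadcastJoinThreshold", (50 * 1024 * 1024).toString)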

10:30am–11:00am
MORNING BREAK

11:00am–12:00pm
GraphX
Datasets used: Clickstream

  • Use cases for graph processing
  • Graph processing fundamentals: Vertex, edge (unidirectional, bidirectional), labels
  • Common graph algorithms: In-degree, out-degree, PageRank (sketched after this list), subgraph, shortest path, triplets
  • GraphX internals: How Spark stores large graphs in RDDs (VertexRDD, EdgeRDD, and routing table RDD)
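
A minimal sketch of building a toy article graph and running PageRank; the vertices and edges are made up:

    import org.apache.spark.graphx.{Edge, Graph}

    val vertices = sc.parallelize(Seq((1L, "Apache_Spark"), (2L, "Hadoop"), (3L, "Scala")))
    val edges = sc.parallelize(Seq(Edge(1L, 2L, 1), Edge(1L, 3L, 1), Edge(3L, 1L, 1)))
    val graph = Graph(vertices, edges)

    graph.inDegrees.collect().foreach(println)    // in-degree per vertex
    val ranks = graph.pageRank(0.001).vertices    // run PageRank to convergence
    ranks.join(vertices)                          // attach article titles to scores
      .sortBy({ case (_, (rank, _)) => rank }, ascending = false)
      .take(3).foreach(println)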

12:00pm–12:30pm
Spark Streaming
Datasets used: Live edits stream of multiple languages

  • Architecture of Spark Streaming: Receivers, batch interval, block interval, direct pull
  • How the microbatch mechanism in Spark Streaming breaks up the stream into tiny batches and processes them
  • How to use a StreamingContext to create input DStreams (discretized streams); a sketch follows this list
  • Common transformations and actions on DStreams (map, filter, count, union, join, etc.)
  • Creating live, dynamically updated visualizations in Databricks (that update every two seconds)
  • Spark UI: How to use the new Spark Streaming UI to understand the performance of batch size versus processing latency
  • Receiver versus direct pull approach
  • High-availability guidelines (WAL, checkpointing)
  • Window operations: Apply transformations over a sliding window of data
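
A minimal DStream sketch with a windowed per-language count; a local socket source stands in here for the live Wikipedia edits feed, and the host, port, and line format are assumptions:

    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val ssc = new StreamingContext(sc, Seconds(2))        // batch interval: 2 seconds
    ssc.checkpoint("/tmp/checkpoints")                     // checkpointing for recovery (see HA guidelines)

    val edits = ssc.socketTextStream("localhost", 9999)    // hypothetical source
    val perLanguage = edits
      .map(line => (line.split("\t")(0), 1))               // assume the language code is field 0
      .reduceByKeyAndWindow(_ + _, Seconds(30), Seconds(10)) // 30s window, sliding every 10s

    perLanguage.print()                                    // an output operation starts the pipeline
    ssc.start()
    ssc.awaitTermination()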

12:30pm–1:30pm
LUNCH

1:30pm–2:30pm
Spark Streaming (cont.)

2:30pm–3:00pm
Spark machine learning
Datasets used: English Wikipedia w/ edits

  • Common use cases of machine learning with Spark
  • When to use Spark MLlib (w/ RDDs) versus Spark ML (w/ DataFrames)
  • ML Pipelines concepts: DataFrames, transformer, estimator, pipeline, parameter (a tf-idf pipeline is sketched after this list)
  • Basic statistics with MLlib
  • tf-idf (term frequency–inverse document frequency)
  • Streaming machine learning (k-means, linear regression, logistic regression)
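
A minimal spark.ml Pipeline sketch computing tf-idf over a two-document toy corpus (Spark 1.6-era API); the documents and column names are invented, and sqlContext is assumed from earlier:

    import org.apache.spark.ml.Pipeline
    import org.apache.spark.ml.feature.{HashingTF, IDF, Tokenizer}

    val docs = sqlContext.createDataFrame(Seq(
      (0L, "spark makes distributed computing simple"),
      (1L, "wikipedia hosts millions of articles")
    )).toDF("id", "text")

    val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
    val tf = new HashingTF().setInputCol("words").setOutputCol("rawFeatures").setNumFeatures(10000)
    val idf = new IDF().setInputCol("rawFeatures").setOutputCol("features")

    // A Pipeline is an estimator: fit() returns a PipelineModel (a transformer)
    val model = new Pipeline().setStages(Array(tokenizer, tf, idf)).fit(docs)
    model.transform(docs).select("id", "features").show()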

3:00pm–3:30pm
AFTERNOON BREAK

3:30pm–4:30pm
Spark machine learning (cont.)

4:30pm–5:00pm
Spark R&D (optional)

  • Project Tungsten
  • New Datasets API
  • Upcoming developments: DataFrames in Streaming and GraphX, new MLlib algorithms, etc.
  • Berkeley Data Analytics Stack (Succinct, IndexedRDD, BlinkDB, SampleClean)

Andy Huang

Servian Australia

Andy Huang is a managing consultant in the big data analytics practice at Servian, a leading consulting company in Australia and New Zealand, where he works with clients in telco, banking, and financial services on big data analytics projects. Andy’s project portfolio includes use of Spark for data integration, streaming, and large-scale machine learning. He also leads solution architecture and implementation and evangelizes Apache Spark in the region.

Comments on this page are now closed.

Comments

AmirBehzad Eslami
11/05/2016 9:27am SGT

Hello,

I hope you don’t mind me asking: why are the standard discounts not available for the training pass? Is there any sort of discount I could apply to a training pass?

Please let me know,
-behzad

Kathy Yu
10/20/2016 10:21pm SGT

Hi Sridhar,

Discounts do not apply to the Platinum or Training passes. The 2-day trainings are not recorded on video. Conference tutorials, keynotes, and sessions are recorded and will be available in Safari for viewing after the conference.

Kathy

Sridhar
10/20/2016 5:38pm SGT

I live in Sydney, Australia. I’m planning to sign up for the Platinum pass, which includes training. I want to do the Spark foundations training on Mon/Tue.

Two questions:
1. Are there any discounts I could avail of as someone coming from overseas?
2. Would I be getting recordings of the other two sessions?

Regards

Sridhar

Sridhar
10/20/2016 5:35pm SGT

Hello, is the training conducted by Brian Clapper?

Thanks

Sridhar