Presented By O'Reilly and Cloudera
Make Data Work
22–23 May 2017: Training
23–25 May 2017: Tutorials & Conference
London, UK

In-Person Training
Spark foundations: Prototyping Spark use cases on Wikipedia datasets

Zoltan Toth (Prezi.com)
Monday, 22 May & Tuesday, 23 May, 9:00 - 17:00
Spark & beyond
Location: Capital Suite 7
Secondary topics: Text Analysis and Mining

This course will sell out—sign up today!

Participants should plan to attend both days of this 2-day training course. Platinum and Training passes do not include access to tutorials on Tuesday.

The real power and value proposition of Apache Spark is in building a unified use case that combines ETL, batch analytics, real-time stream analysis, machine learning, graph processing, and visualizations. Zoltan Toth employs hands-on exercises using various Wikipedia datasets to illustrate the variety of ideal programming paradigms Spark makes possible.

What you'll learn, and how you can apply it

  • Understand the variety of ideal programming paradigms Spark makes possible

Prerequisites:

  • A basic understanding of software development
  • Some experience coding in Python, Java, SQL, Scala, or R
  • Familiarity with Scala programming basics (check out Scala Basics and Atomic Scala)

The real power and value proposition of Apache Spark is in building a unified use case that combines ETL, batch analytics, real-time stream analysis, machine learning, graph processing, and visualizations. Zoltan Toth employs hands-on exercises using various Wikipedia datasets to illustrate the variety of ideal programming paradigms Spark makes possible. By the end of the training, you’ll be able to create proofs of concept and prototype applications using Spark.

The course will consist of about 50% lecture and 50% hands-on labs. All participants will have access to Databricks Community Edition after class to continue working on labs and assignments.

Note that most of the hands-on labs will be taught in Scala. (PySpark architecture and code examples will be covered briefly.)

Who should attend?

People with less than two months of hands-on experience with Spark

Datasets explored in class:

  • Pageviews
  • Clickstream
  • Pagecounts
  • English Wikipedia
  • Live edit stream (multiple languages)
  • English Wikipedia with edits

Outline

Day 1

9:00am – 9:30am
Introduction to Wikipedia and Spark
Demo: Logging into Databricks and a tour of the user interface

  • Overview of the six Wikipedia data sources
  • Overview of Apache Spark APIs, libraries, and cluster architecture

9:30am – 10:30am
DataFrames and Spark SQL
Datasets used: Pageviews and Clickstream

  • Use a SQLContext to create a DataFrame from different data sources (S3, JSON, RDBMS, HDFS, Cassandra, etc.); a short code sketch follows this list
  • Run some common operations on DataFrames to explore them
  • Cache a DataFrame into memory
  • Correctly size the number of partitions in a DataFrame, including the size of each partition
  • Use the spark-csv library from Spark Packages to read structured files
  • Mix SQL and DataFrame queries
  • Write a user-defined function (UDF)
  • Join two DataFrames
  • Overview of how Spark SQL’s Catalyst optimizer converts logical plans to optimized physical plans
  • Create visualizations using matplotlib, Databricks, and Google Visualizations
  • Use the Spark UI’s new SQL tab to troubleshoot performance issues (like input read size, identifying stage boundaries, and Cartesian products)
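
To ground this unit, here is a minimal notebook-style Scala sketch of the flow above. It assumes sc and sqlContext are predefined (as in a Databricks notebook), and the path and column names (article, requests) are placeholders, not the actual course data:

    import org.apache.spark.sql.functions.udf

    // Create a DataFrame from a JSON source (path and schema are placeholders)
    val pageviews = sqlContext.read.json("/data/pageviews.json")

    // Explore the DataFrame and cache it in memory
    pageviews.printSchema()
    pageviews.cache()

    // Mix SQL and DataFrame queries via a temporary table
    pageviews.registerTempTable("pageviews")
    val top = sqlContext.sql(
      "SELECT article, SUM(requests) AS total FROM pageviews " +
      "GROUP BY article ORDER BY total DESC LIMIT 10")

    // A user-defined function (UDF) that normalizes article titles
    val normalizeTitle = udf((title: String) => title.trim.toLowerCase)
    top.select(normalizeTitle(top("article")).as("article"), top("total")).show()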

10:30am – 11:00am
MORNING BREAK

11:00am – 12:30pm
DataFrames and Spark SQL (cont.)

12:30pm – 1:30pm
LUNCH

1:30pm – 3:00pm
Spark core architecture

  • Driver and executor JVMs
  • Local mode
  • Resource managers (standalone, YARN, Mesos)
  • How to optimally configure Spark (# of slots, JVM sizes, garbage collection, etc.)
  • PySpark architecture (different serialization, extra Python processes, UDFs are slower, etc.)
  • Reading Spark logs and stdout on drivers versus executors
  • Spark UI: Exploring the user interface to understand what’s going on behind the scenes of your application (# of tasks, memory of executors, slow tasks, Spark master/worker UIs, etc.)
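
As a rough illustration of the configuration topics above, a minimal sketch of building a SparkConf for a standalone application; the values are illustrative assumptions, not recommendations:

    import org.apache.spark.{SparkConf, SparkContext}

    // Illustrative values only; the right settings depend on your cluster and workload
    val conf = new SparkConf()
      .setAppName("architecture-sketch")
      .setMaster("local[4]")                  // local mode with 4 slots (threads)
      .set("spark.executor.memory", "4g")     // executor JVM heap size
      .set("spark.default.parallelism", "8")  // default number of tasks after shuffles
      .set("spark.memory.fraction", "0.6")    // heap share for execution and storage

    val sc = new SparkContext(conf)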

3:00pm – 3:30pm
AFTERNOON BREAK

3:30pm – 5:00pm
Resilient distributed datasets
Datasets used: Pagecounts and English Wikipedia

  • When to use DataFrames versus RDDs (type safety, memory pressure, optimizations, I/O)
  • Two ways to create an RDD using a SparkContext: Parallelize and read from an external data source
  • Common transformations and actions
  • Narrow versus wide transformations and performance implications (pipelining, shuffle)
  • How transformations lazily build up a directed acyclic graph (DAG)
  • How a Spark application breaks down to Jobs > Stages > Tasks
  • Repartitioning an RDD (repartition versus coalesce)
  • Different memory persistence levels for RDDs (memory, disk, serialization, etc.)
  • Different types of RDDs (HadoopRDD, ShuffledRDD, MapPartitionsRDD, PairRDD, etc.)
  • Spark UI: How to interpret the new DAG visualization, how to troubleshoot common performance issues like GroupByKey versus ReduceByKey by looking at shuffle read/write info
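
A minimal Scala sketch of the RDD concepts above (assuming a predefined sc; the input path is a placeholder):

    import org.apache.spark.storage.StorageLevel

    // Two ways to create an RDD: parallelize a local collection, or read an external source
    val words = sc.parallelize(Seq("spark", "wiki", "spark", "data"))
    val lines = sc.textFile("/data/pagecounts")   // placeholder path

    // map is a narrow transformation; reduceByKey is wide (it shuffles),
    // but unlike groupByKey it pre-aggregates on each partition first
    val counts = words
      .map(w => (w, 1))
      .reduceByKey(_ + _)

    // coalesce reduces the number of partitions without a full shuffle;
    // repartition always shuffles
    val fewer = counts.coalesce(2)

    // Choose an explicit persistence level; nothing executes until an action runs
    fewer.persist(StorageLevel.MEMORY_AND_DISK)
    println(fewer.count())                        // the action triggers the whole DAG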

Day 2

9:00am – 9:30am
Review of Day 1

  • DataFrames and Spark SQL
  • Spark architecture
  • RDDs

9:30am – 10:30am
Shared variables (accumulators and broadcast variables)

  • Common use cases for shared variables
  • How accumulators can be used to implement distributed counters in parallel
  • Using broadcast variables to keep a read-only variable cached on each machine
  • Broadcast variables internals: BitTorrent implementation
  • Differences between broadcast variables and closures/lambdas (across stages versus per stage)
  • Configuring the autoBroadcastJoinThreshold in Spark SQL to do more efficient joins
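
A minimal sketch of both shared-variable types, using the Spark 1.x accumulator API; the input format (language code, count) is an assumption for illustration:

    // An accumulator as a distributed counter
    val badRecords = sc.accumulator(0, "badRecords")

    // A broadcast variable: a read-only lookup table cached once per executor
    val languageNames = sc.broadcast(Map("en" -> "English", "de" -> "German"))

    val lines = sc.parallelize(Seq("en 100", "xx 5", "de 42"))
    val named = lines.map { line =>
      val Array(code, count) = line.split(" ")
      if (!languageNames.value.contains(code)) badRecords += 1
      (languageNames.value.getOrElse(code, "unknown"), count.toInt)
    }

    named.collect()               // accumulators update only when an action runs
    println(badRecords.value)     // read the counter back on the driver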

10:30am – 11:00am
MORNING BREAK

11:00am – 12:00pm
GraphX
Datasets used: Clickstream

  • Use cases for graph processing
  • Graph processing fundamentals: Vertex, edge (unidirectional, bidirectional), labels
  • Common graph algorithms: In-degree, out-degree, PageRank, subgraph, shortest path, triplets
  • GraphX internals: How Spark stores large graphs in RDDs (VertexRDD, EdgeRDD, and routing table RDD)
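
A minimal GraphX sketch with a toy, hand-built graph standing in for the Clickstream data:

    import org.apache.spark.graphx.{Edge, Graph}

    // Vertices are (id, article title); edge attributes are click counts
    val vertices = sc.parallelize(Seq(
      (1L, "London"), (2L, "United_Kingdom"), (3L, "River_Thames")))
    val edges = sc.parallelize(Seq(
      Edge(1L, 2L, 120), Edge(1L, 3L, 45), Edge(2L, 1L, 80)))

    val graph = Graph(vertices, edges)

    // Degree measures
    graph.inDegrees.collect().foreach(println)

    // PageRank, iterated until convergence within the given tolerance
    val ranks = graph.pageRank(0.001).vertices
    ranks.join(vertices).collect().foreach(println)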

12:00pm – 12:30pm
Spark Streaming
Datasets used: Live edits stream of multiple languages

  • Architecture of Spark Streaming: Receivers, batch interval, block interval, direct pull
  • How the microbatch mechanism in Spark Streaming breaks up the stream into tiny batches and processes them
  • How to use a StreamingContext to create input DStreams (discretized streams)
  • Common transformations and actions on DStreams (map, filter, count, union, join, etc.)
  • Creating live, dynamically updated visualizations in Databricks (that update every two seconds)
  • Spark UI: How to use the new Spark Streaming UI to understand the performance of batch size versus processing latency
  • Receiver versus direct pull approach
  • High-availability guidelines (WAL, checkpointing)
  • Window operations: Apply transformations over a sliding window of data
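
A minimal sketch of a windowed DStream computation; the socket source and the "language code is the first field" input format are placeholder assumptions standing in for the live edits stream:

    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // A two-second batch interval; the microbatch engine groups records per interval
    val ssc = new StreamingContext(sc, Seconds(2))
    val edits = ssc.socketTextStream("localhost", 9999)

    // Count edits per language over a 30-second window, sliding every 10 seconds
    val perLanguage = edits
      .map(line => (line.split(" ")(0), 1))
      .reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(30), Seconds(10))

    perLanguage.print()

    ssc.start()
    ssc.awaitTermination()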

12:30pm – 1:30pm
LUNCH

1:30pm – 2:30pm
Spark Streaming (cont.)

2:30pm – 3:00pm
Spark machine learning
Datasets used: English Wikipedia w/ edits

  • Common use cases of machine learning with Spark
  • When to use Spark MLlib (w/ RDDs) versus Spark ML (w/ DataFrames)
  • ML Pipelines concepts: DataFrames, transformer, estimator, pipeline, parameter
  • Basic statistics with MLlib
  • Tf-idf (term frequency-inverse document frequency)
  • Streaming machine learning (k-means, linear regression, logistic regression)
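
A minimal sketch of a tf-idf ML Pipeline over a tiny hypothetical corpus (assuming a predefined sqlContext; the real labs use Wikipedia text):

    import org.apache.spark.ml.Pipeline
    import org.apache.spark.ml.feature.{HashingTF, IDF, Tokenizer}

    // A tiny stand-in corpus
    val docs = sqlContext.createDataFrame(Seq(
      (0L, "spark makes distributed computing simple"),
      (1L, "wikipedia is a free online encyclopedia"))).toDF("id", "text")

    // Transformers (Tokenizer, HashingTF) and an estimator (IDF) chained as a Pipeline
    val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
    val tf = new HashingTF().setInputCol("words").setOutputCol("rawFeatures")
    val idf = new IDF().setInputCol("rawFeatures").setOutputCol("features")

    val pipeline = new Pipeline().setStages(Array(tokenizer, tf, idf))
    val model = pipeline.fit(docs)          // fitting yields a PipelineModel
    model.transform(docs).select("id", "features").show()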

3:00pm – 3:30pm
AFTERNOON BREAK

3:30pm – 4:30pm
Spark machine learning (cont.)

4:30pm – 5:00pm
Spark R&D (optional)

  • Project Tungsten
  • New Datasets API
  • Upcoming developments: DataFrames in Streaming and GraphX, new MLlib algorithms, etc.
  • Berkeley Data Analytics Stack (Succinct, IndexedRDD, BlinkDB, SampleClean)
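
For reference, a minimal sketch of the 1.6-era typed Datasets API (assuming a predefined sqlContext; the Edit case class and values are hypothetical):

    import sqlContext.implicits._

    // A typed view over the same Tungsten binary format that DataFrames use
    case class Edit(article: String, bytes: Int)

    val edits = Seq(Edit("London", 120), Edit("River_Thames", -15)).toDS()

    // Field access is checked at compile time, unlike untyped DataFrame columns
    edits.filter(_.bytes > 0).collect().foreach(println)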

About your instructor

I am a data analyst at Prezi.com.

Twitter: @zoltanctoth

Conference registration

Get the Platinum pass or the Training pass to add this course to your package.
