Presented By O'Reilly and Cloudera
December 5-6, 2016: Training
December 6–8, 2016: Tutorials & Conference
Singapore

TRAINING: Spark foundations: Prototyping Spark use cases on Wikipedia datasets (Day 2)

Andy Huang (Servian Australia)
9:00am–5:00pm Tuesday, December 6, 2016
Location: 331-332
Tags: streaming
Average rating: 4.00 (1 rating)

What you'll learn

  • Explore the variety of ideal programming paradigms Spark makes possible

    Description

    The real power and value proposition of Apache Spark is in building a unified use case that combines ETL, batch analytics, real-time stream analysis, machine learning, graph processing, and visualizations. Andy Huang employs hands-on exercises using various Wikipedia datasets to illustrate the variety of ideal programming paradigms Spark makes possible. By the end of the training, you’ll be able to create proofs of concept and prototype applications using Spark.

    The course will consist of about 50% lecture and 50% hands-on labs. All participants will have access to Databricks Community Edition after class to continue working on labs and assignments.

    Note that most of the hands-on labs will be taught in Scala. (PySpark architecture and code examples will be covered briefly.)

    Who should attend?

    People with less than two months of hands-on experience with Spark

    Datasets explored in class:

    • Pageviews
    • Clickstream
    • Pagecounts
    • English Wikipedia
    • English Wikipedia with edits
    • Live edit stream (multiple languages)

    Outline

    Day 1

    9:00am–9:30am
    Introduction to Wikipedia and Spark
    Demo: Logging into Databricks and a tour of the user interface

    • Overview of the six Wikipedia data sources
    • Overview of Apache Spark APIs, libraries, and cluster architecture

    9:30am–10:30am
    DataFrames and Spark SQL
    Datasets used: Pageviews and Clickstream

    • Use a SQLContext to create a DataFrame from different data sources (S3, JSON, RDBMS, HDFS, Cassandra, etc.)
    • Run some common operations on DataFrames to explore them
    • Cache a DataFrame into memory
    • Correctly size the number of partitions in a DataFrame, including the size of each partition
    • Use the Spark CSV library from Spark Packages to read structured files
    • Mix SQL and DataFrame queries
    • Write a user-defined function (UDF)
    • Join two DataFrames
    • Overview of how Spark SQL’s Catalyst optimizer converts logical plans to optimized physical plans
    • Create visualizations using matplotlib, Databricks, and Google Visualizations
    • Use the Spark UI’s new SQL tab to troubleshoot performance issues (like input read size, identifying stage boundaries, and Cartesian products)
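
    To make these DataFrame operations concrete, here is a minimal Scala sketch in the style of a Databricks notebook cell, where sqlContext is already provided; the JSON paths and column names (project, requests, page, curr_title) are illustrative placeholders rather than the actual lab datasets.

      import org.apache.spark.sql.functions._

      // Create a DataFrame from a JSON source and cache it in memory
      val pageviews = sqlContext.read.json("/data/wikipedia/pageviews.json")
      pageviews.cache()
      pageviews.printSchema()

      // Mix SQL and DataFrame queries against the same data
      pageviews.registerTempTable("pageviews")
      val topProjects = sqlContext.sql(
        "SELECT project, SUM(requests) AS total FROM pageviews GROUP BY project ORDER BY total DESC")
      topProjects.show(5)

      // A simple user-defined function (UDF) used as a filter
      val isEnglish = udf((project: String) => project.startsWith("en"))
      val english = pageviews.filter(isEnglish(col("project")))

      // Join two DataFrames
      val clickstream = sqlContext.read.json("/data/wikipedia/clickstream.json")
      english.join(clickstream, english("page") === clickstream("curr_title")).show(5)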

    10:30am–11:00am
    MORNING BREAK

    11:00am–12:30pm
    DataFrames and Spark SQL (cont.)

    12:30pm–1:30pm
    LUNCH

    1:30pm–3:00pm
    Spark core architecture

    • Driver and executor JVMs
    • Local mode
    • Resource managers (standalone, YARN, Mesos)
    • How to optimally configure Spark (# of slots, JVM sizes, garbage collection, etc.)
    • PySpark architecture (different serialization, extra Python processes, UDFs are slower, etc.)
    • Reading Spark logs and stdout on drivers vs. executors
    • Spark UI: exploring the user interface to understand what’s going on behind the scenes of your application (# of tasks, memory of executors, slow tasks, Spark master/worker UIs, etc.)
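
    As a hedged illustration of these configuration knobs, a short sketch of building a SparkContext with explicit settings; the values shown are arbitrary examples, not recommendations.

      import org.apache.spark.{SparkConf, SparkContext}

      // Example values only; the right numbers depend on your cluster and workload
      val conf = new SparkConf()
        .setAppName("architecture-sketch")
        .setMaster("local[4]")                 // local mode with 4 slots; use standalone/YARN/Mesos on a cluster
        .set("spark.executor.memory", "4g")    // executor JVM heap size
        .set("spark.executor.cores", "2")      // task slots per executor
        .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")

      val sc = new SparkContext(conf)
      println(sc.defaultParallelism)           // default number of parallel tasks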

    3:00pm–3:30pm
    AFTERNOON BREAK

    3:30pm–5:00pm
    Resilient distributed datasets
    Datasets used: Pagecounts and English Wikipedia

    • When to use DataFrames vs. RDDs (type-safety, memory pressure, optimizations, i/o)
    • Two ways to create an RDD using a SparkContext: parallelize and read from an external data source
    • Common transformations and actions
    • Narrow vs. wide transformations and performance implications (pipelining, shuffle)
    • How transformations lazily build up a directed acyclic graph (DAG)
    • How a Spark application breaks down to Jobs > Stages > Tasks
    • Repartitioning an RDD (repartition vs. coalesce)
    • Different memory persistence levels for RDDs (memory, disk, serialization, etc.)
    • Different types of RDDs (HadoopRDD, ShuffledRDD, MapPartitionsRDD, PairRDD, etc.)
    • Spark UI: how to interpret the new DAG visualization, how to troubleshoot common performance issues like GroupByKey vs. ReduceByKey by looking at shuffle read/write info
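
    A small RDD sketch of the ideas above, again assuming a notebook-provided sc; the pagecounts path and its space-separated field layout are assumptions.

      import org.apache.spark.storage.StorageLevel

      // Two ways to create an RDD: parallelize a local collection, or read an external source
      val sample = sc.parallelize(Seq("en Main_Page 42", "en Apache_Spark 17"))
      val pagecounts = sc.textFile("/data/wikipedia/pagecounts")   // placeholder path

      // Narrow transformations (map, filter) pipeline within a single stage
      val pairs = pagecounts
        .map(_.split(" "))
        .filter(_.length >= 3)
        .map(f => (f(1), f(2).toLong))        // (page title, view count) -- assumed layout

      // Wide transformation: reduceByKey shuffles far less data than groupByKey
      val totals = pairs.reduceByKey(_ + _)

      // Repartitioning and persistence
      val compact = totals.coalesce(8)        // shrink partitions without a full shuffle
      compact.persist(StorageLevel.MEMORY_AND_DISK_SER)

      println(compact.toDebugString)          // the lazily built DAG lineage
      compact.take(5).foreach(println)        // an action triggers Jobs > Stages > Tasks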

    Day 2

    9:00am–9:30am
    Review of Day 1

    • DataFrames and Spark SQL
    • Spark architecture
    • RDDs

    9:30am–10:30am
    Shared variables (accumulators and broadcast variables)

    • Common use cases for shared variables
    • How accumulators can be used to implement distributed counters in parallel
    • Using broadcast variables to keep a read-only variable cached on each machine
    • Broadcast variables internals: BitTorrent implementation
    • Differences between broadcast variables and closures/lambdas (across stages vs. per stage)
    • Configuring the autoBroadcastJoinThreshold in Spark SQL to do more efficient joins
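
    A brief sketch of both shared-variable types, assuming a notebook-provided sc and sqlContext; the TSV path, field layout, and lookup map are invented for illustration.

      val lines = sc.textFile("/data/wikipedia/clickstream.tsv")   // placeholder path

      // Accumulator: a distributed counter updated on executors, read on the driver
      val badRecords = sc.accumulator(0L, "bad records")

      // Broadcast variable: one read-only copy cached per machine
      val languageNames = sc.broadcast(Map("en" -> "English", "ja" -> "Japanese"))

      val parsed = lines.flatMap { line =>
        val fields = line.split("\t")
        if (fields.length < 2) { badRecords += 1L; None }
        else Some((languageNames.value.getOrElse(fields(0), "other"), 1L))
      }

      parsed.reduceByKey(_ + _).collect().foreach(println)
      println(s"bad records: ${badRecords.value}")   // only valid after an action has run

      // Spark SQL broadcasts small tables automatically in joins below this size (bytes)
      sqlContext.setConf("spark.sql.autoBroadcastJoinThreshold", (10 * 1024 * 1024).toString)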

    10:30am–11:00am
    MORNING BREAK

    11:00am–12:00pm
    GraphX
    Datasets used: Clickstream

    • Use cases for graph processing
    • Graph processing fundamentals: vertex, edge (unidirectional, bidirectional), labels
    • Common graph algorithms: in-degree, out-degree, PageRank, subgraph, shortest path, triplets
    • GraphX internals: How Spark stores large graphs in RDDs (VertexRDD, EdgeRDD, and routing table RDD)
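
    A toy GraphX sketch of these operations, assuming a notebook-provided sc; the vertices, edges, and click counts are made up rather than drawn from the Clickstream dataset.

      import org.apache.spark.graphx.{Edge, Graph}

      // A tiny directed click graph: (vertex id, page title) vertices and weighted edges
      val pages = sc.parallelize(Seq((1L, "Main_Page"), (2L, "Apache_Spark"), (3L, "Scala")))
      val clicks = sc.parallelize(Seq(Edge(1L, 2L, 120L), Edge(2L, 3L, 45L), Edge(1L, 3L, 30L)))
      val graph = Graph(pages, clicks)

      graph.inDegrees.collect().foreach(println)              // in-degree per vertex
      graph.outDegrees.collect().foreach(println)             // out-degree per vertex

      val ranks = graph.pageRank(0.001).vertices              // PageRank, iterated to tolerance 0.001
      ranks.take(3).foreach(println)

      val popular = graph.subgraph(epred = t => t.attr > 40)  // keep only heavily clicked edges
      println(popular.edges.count())
      graph.triplets.take(3).foreach(println)                 // (source, edge, destination) views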

    12:00pm–12:30pm
    Spark Streaming
    Datasets used: Live edits stream of multiple languages

    • Architecture of Spark Streaming: receivers, batch interval, block interval, direct pull
    • How the microbatch mechanism in Spark Streaming breaks up the stream into tiny batches and processes them
    • How to use a StreamingContext to create input DStreams (discretized streams)
    • Common transformations and actions on DStreams (map, filter, count, union, join, etc.)
    • Creating live, dynamically updated visualizations in Databricks (that update every 2 seconds)
    • Spark UI: how to use the new Spark Streaming UI to understand the performance of batch size vs. processing latency
    • Receiver vs. direct pull approach
    • High availability guidelines (WAL, checkpointing)
    • Window operations: apply transformations over a sliding window of data
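
    A minimal DStream sketch tying these pieces together, assuming a notebook-provided sc; a socket source on localhost:9999 stands in for the live edit feed, and the tab-separated field layout is an assumption.

      import org.apache.spark.streaming.{Seconds, StreamingContext}

      val ssc = new StreamingContext(sc, Seconds(2))          // batch interval of 2 seconds
      ssc.checkpoint("/tmp/streaming-checkpoint")             // for stateful/windowed recovery

      // Receiver-based input DStream (a direct-pull source such as Kafka is the other option)
      val edits = ssc.socketTextStream("localhost", 9999)

      // Count edits per language over a 60-second window that slides every 10 seconds
      val perLanguage = edits
        .map(line => (line.split("\t")(0), 1L))               // assumes language code is the first field
        .reduceByKeyAndWindow(_ + _, Seconds(60), Seconds(10))

      perLanguage.print()
      ssc.start()
      ssc.awaitTermination()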

    12:30pm–1:30pm
    LUNCH

    1:30pm–2:30pm
    Spark Streaming (cont.)

    2:30pm–3:00pm
    Spark machine learning
    Datasets used: English Wikipedia w/ edits

    • Common use cases of machine learning with Spark
    • When to use Spark MLlib (w/ RDDs) vs. Spark ML (w/ DataFrames)
    • ML Pipelines concepts: DataFrames, transformer, estimator, pipeline, parameter
    • Basic statistics with MLlib
    • Tf-idf (term frequency-inverse document frequency)
    • Streaming machine learning (k-means, linear regression, logistic regression)
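
    A compact ML Pipelines sketch combining TF-IDF features with k-means clustering, assuming a notebook-provided sqlContext; the Parquet path and the text column name are placeholders.

      import org.apache.spark.ml.Pipeline
      import org.apache.spark.ml.clustering.KMeans
      import org.apache.spark.ml.feature.{HashingTF, IDF, Tokenizer}

      val articles = sqlContext.read.parquet("/data/wikipedia/articles.parquet")  // placeholder

      // Transformers and estimators chained into a single Pipeline
      val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
      val hashingTF = new HashingTF().setInputCol("words").setOutputCol("tf").setNumFeatures(10000)
      val idf       = new IDF().setInputCol("tf").setOutputCol("features")
      val kmeans    = new KMeans().setK(10).setFeaturesCol("features")

      val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, idf, kmeans))
      val model    = pipeline.fit(articles)                   // estimators are fit in order
      model.transform(articles).select("text", "prediction").show(5)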

    3:00pm–3:30pm
    AFTERNOON BREAK

    3:30pm–4:30pm
    Spark machine learning (cont.)

    4:30pm–5:00pm
    Spark R&D (optional)

    • Project Tungsten
    • New Datasets API
    • Upcoming developments: DataFrames in Streaming and GraphX, new MLlib algorithms, etc.
    • Berkeley Data Analytics Stack (Succinct, IndexedRDD, BlinkDB, SampleClean)
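
    For the Datasets API item, a small sketch of the typed API as it looked around Spark 1.6, assuming a notebook-provided sqlContext; the case class, path, and field names are illustrative only.

      // Typed Dataset API (experimental around Spark 1.6) on top of Catalyst/Tungsten
      case class PageView(project: String, page: String, requests: Long)

      import sqlContext.implicits._

      val views = sqlContext.read.json("/data/wikipedia/pageviews.json").as[PageView]

      // Operations are checked against the case class fields at compile time
      val busy = views.filter(v => v.requests > 1000)
      busy.map(v => (v.project, v.page)).take(5).foreach(println)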

    Andy Huang

    Servian Australia

Andy Huang is a managing consultant in the big data analytics practice at Servian, a leading consulting company in Australia and New Zealand. He works with clients in telco, banking, and financial services on big data analytics projects. His project portfolio includes using Spark for data integration, streaming, and large-scale machine learning. Andy leads solution architecture and implementation and evangelizes Apache Spark in the region.