Presented By O’Reilly and Cloudera
Make Data Work
March 5–6, 2018: Training
March 6–8, 2018: Tutorials & Conference
San Jose, CA

Apache Spark programming (Day 2)

Brooke Wenig (Databricks)
Location: 212 A-B

Who is this presentation for?

  • You're a software developer, data analyst, data engineer, or data scientist who wants to use Apache Spark for machine learning and data science.

Prerequisite knowledge

  • Experience coding in Python or Scala and using Spark
  • A basic understanding of data science topics and terminology
  • Familiarity with DataFrames (useful but not required)

What you'll learn

  • Understand Spark’s fundamental mechanics and Spark internals
  • Learn how to use the core Spark APIs to operate on data
  • Build data pipelines and query large datasets using Spark SQL and DataFrames
  • Analyze Spark jobs using the administration UIs and logs inside Databricks
  • Create Structured Streaming and machine learning jobs
  • Be able to articulate and implement typical use cases for Spark

Description

Brooke Wenig walks you through the core APIs for using Spark, fundamental mechanisms and basic internals of the framework, SQL and other high-level data access tools, and Spark’s streaming capabilities and machine learning APIs. Join in to learn how to perform machine learning on Spark and explore the algorithms supported by the Spark MLlib APIs.

Each topic includes lecture content along with hands-on use of Spark through an elegant web-based notebook environment. Notebooks allow attendees to code jobs, data analysis queries, and visualizations using their own Spark cluster, accessed through a web browser. You can keep the notebooks and continue to use them with the free Databricks Community Edition offering. Alternatively, each notebook can be exported as source code and run within any Spark environment.

Outline

Spark overview

  • The DataFrames programming API
  • Spark SQL
  • The Catalyst query optimizer
  • The Tungsten in-memory data format
  • The Dataset API and encoders
  • Use of the Spark UI to help understand DataFrame behavior and performance
  • Caching and storage levels

Spark internals

  • How Spark schedules and executes jobs and tasks
  • Shuffling, shuffle files, and performance
  • How various data sources are partitioned
  • How Spark handles data reads and writes

Graph processing with GraphFrames

Spark ML’s Pipeline API for machine learning

Spark Structured Streaming

Brooke Wenig

Databricks

Brooke Wenig is an instructor and data science consultant for Databricks. Previously, she was a teaching associate at UCLA, where she taught graduate machine learning, senior software engineering, and introductory programming courses. Brooke also worked at Splunk and Under Armour as a KPCB fellow. She holds an MS in computer science with highest honors from UCLA with a focus on distributed machine learning. Brooke speaks Mandarin Chinese fluently and enjoys cycling.