Presented By O’Reilly and Cloudera
Make Data Work
21–22 May 2018: Training
22–24 May 2018: Tutorials & Conference
London, UK

In-Person Training
Apache Spark programming

Monday, 21 May & Tuesday, 22 May, 9:00–17:00

Participants should plan to attend both days of this 2-day training course. Platinum and Training passes do not include access to tutorials on Tuesday.

The instructor walks you through the core APIs for using Spark, fundamental mechanisms and basic internals of the framework, SQL and other high-level data access tools, and Spark’s streaming capabilities and machine learning APIs.

What you'll learn, and how you can apply it

  • Understand Spark’s fundamental mechanics and Spark internals
  • Learn how to use the core Spark APIs to operate on data
  • Build data pipelines and query large data sets using Spark SQL and DataFrames
  • Analyze Spark jobs using the administration UIs and logs inside Databricks
  • Create Structured Streaming and machine learning jobs
  • Be able to articulate and implement typical use cases for Spark

This training is for you because...

  • You're a software developer, data analyst, data engineer, or data scientist who wants to use Apache Spark for machine learning and data science.

Prerequisites:

  • Experience coding in Python or Scala and using Spark
  • A basic understanding of data science topics and terminology
  • Familiarity with DataFrames (useful but not required)

Hardware and/or installation requirements:

  • A laptop with an up-to-date version of Chrome or Firefox (Internet Explorer not supported)

Join in to learn how to perform machine learning on Spark and explore the algorithms supported by the Spark MLlib APIs.

Each topic includes lecture content along with hands-on use of Spark through an elegant web-based notebook environment. Notebooks allow attendees to code jobs, data analysis queries, and visualizations using their own Spark cluster, accessed through a web browser. You can keep the notebooks and continue to use them with the free Databricks Community Edition offering. Alternatively, each notebook can be exported as source code and run within any Spark environment.

Outline

Spark overview

  • The DataFrames programming API
  • Spark SQL
  • The Catalyst query optimizer
  • The Tungsten in-memory data format
  • The Dataset API, encoders, and decoders
  • Use of the Spark UI to help understand DataFrame behavior and performance
  • Caching and storage levels
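To give a sense of what the overview topics look like in practice, here is a minimal sketch in Scala of the DataFrame API, Spark SQL, and explicit caching; the input path and column names (/tmp/events.csv, status, country) are illustrative assumptions rather than course material.

```scala
// Minimal sketch of the DataFrame API, Spark SQL, and caching.
// The input path and column names are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.storage.StorageLevel

object OverviewSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("overview-sketch").getOrCreate()

    // Read a CSV file into a DataFrame, inferring the schema.
    val events = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("/tmp/events.csv")                     // hypothetical path

    // DataFrame API: filter, group, and count.
    val byCountry = events
      .filter(col("status") === "ok")             // hypothetical columns
      .groupBy("country")
      .count()

    // Cache the result at an explicit storage level before reusing it.
    byCountry.persist(StorageLevel.MEMORY_AND_DISK)

    // Spark SQL: the same data queried through a temporary view.
    events.createOrReplaceTempView("events")
    spark.sql("SELECT country, COUNT(*) AS n FROM events GROUP BY country").show()

    // explain() prints the Catalyst physical plan; the Spark UI shows the same jobs.
    byCountry.explain()

    spark.stop()
  }
}
```

In a Databricks notebook the SparkSession is already provided as `spark`, so the builder boilerplate can be dropped.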

Spark internals

  • How Spark schedules and executes jobs and tasks
  • Shuffling, shuffle files, and performance
  • How various data sources are partitioned
  • How Spark handles data reads and writes
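To make the internals topics concrete, the sketch below (again with a hypothetical Parquet path) inspects input partitioning, adjusts spark.sql.shuffle.partitions before a shuffle-inducing aggregation, and prints the physical plan; it illustrates the ideas above rather than reproducing the course material.

```scala
// Rough sketch of partitioning and shuffle behaviour; paths are hypothetical.
import org.apache.spark.sql.SparkSession

object InternalsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("internals-sketch").getOrCreate()

    val df = spark.read.parquet("/tmp/events.parquet")   // hypothetical path

    // How many partitions the source was split into on read.
    println(s"input partitions: ${df.rdd.getNumPartitions}")

    // A groupBy triggers a shuffle; spark.sql.shuffle.partitions controls
    // how many partitions (and shuffle files) the exchange produces.
    spark.conf.set("spark.sql.shuffle.partitions", "64")
    val counts = df.groupBy("country").count()

    // The physical plan shows the Exchange (shuffle) stage.
    counts.explain()

    // Repartitioning before a write changes how many output files are produced.
    counts.repartition(8).write.mode("overwrite").parquet("/tmp/counts.parquet")

    spark.stop()
  }
}
```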

Graph processing with GraphFrames
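GraphFrames ships as a separate Spark package. The following is a small, assumed example with made-up vertices and edges, sketching the kind of graph queries and algorithms it exposes.

```scala
// Minimal GraphFrames sketch (requires the graphframes package on the classpath).
// The vertex and edge data are made-up examples.
import org.apache.spark.sql.SparkSession
import org.graphframes.GraphFrame

object GraphSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("graphframes-sketch").getOrCreate()
    import spark.implicits._

    // Vertices must have an "id" column; edges need "src" and "dst".
    val vertices = Seq(("a", "Alice"), ("b", "Bob"), ("c", "Carol")).toDF("id", "name")
    val edges = Seq(("a", "b", "follows"), ("b", "c", "follows")).toDF("src", "dst", "relationship")

    val graph = GraphFrame(vertices, edges)

    // Basic graph queries return DataFrames.
    graph.inDegrees.show()

    // PageRank runs as a series of Spark jobs over the graph.
    val ranks = graph.pageRank.resetProbability(0.15).maxIter(10).run()
    ranks.vertices.select("id", "pagerank").show()

    spark.stop()
  }
}
```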

Spark ML’s Pipeline API for machine learning
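The spark.ml Pipeline API chains feature transformers and an estimator into a single object that can be fit once and reused. Below is a brief sketch along the lines of the standard text-classification example; the tiny training data and parameter values are placeholders.

```scala
// Sketch of the spark.ml Pipeline API: feature stages feed an estimator,
// and fit() returns a reusable PipelineModel. The data is made up.
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
import org.apache.spark.sql.SparkSession

object PipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("pipeline-sketch").getOrCreate()
    import spark.implicits._

    val training = Seq(
      ("spark is great", 1.0),
      ("boring stacktrace", 0.0)
    ).toDF("text", "label")

    // Transformers and an estimator chained into a single Pipeline.
    val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
    val hashingTF = new HashingTF().setInputCol("words").setOutputCol("features")
    val lr = new LogisticRegression().setMaxIter(10)
    val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, lr))

    // Fitting the pipeline fits every stage in order.
    val model = pipeline.fit(training)

    // The fitted model transforms new data end to end.
    model.transform(Seq(("spark streaming", 0.0)).toDF("text", "label"))
      .select("text", "prediction")
      .show()

    spark.stop()
  }
}
```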

Spark Structured Streaming
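Structured Streaming reuses the DataFrame API over unbounded input. Here is a minimal word-count sketch using the built-in socket source and console sink; the host, port, and output mode are illustrative choices, not course requirements.

```scala
// Minimal Structured Streaming sketch: socket source, console sink.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object StreamingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("streaming-sketch").getOrCreate()

    // Read lines from a socket as an unbounded DataFrame.
    val lines = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", "9999")
      .load()

    // The same DataFrame operations apply to streaming data.
    val wordCounts = lines
      .select(explode(split(col("value"), " ")).as("word"))
      .groupBy("word")
      .count()

    // Complete output mode re-emits the full aggregate on every trigger.
    val query = wordCounts.writeStream
      .outputMode("complete")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```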

Conference registration

Get the Platinum pass or the Training pass to add this course to your package. Best Price ends 23 February.
