March 28–29, 2016: Training
March 29–31, 2016: Conference
San Jose, CA

SparkNet: Training deep networks in Spark

Robert Nishihara (University of California, Berkeley)
11:00am–11:40am Wednesday, 03/30/2016
Spark & Beyond

Location: LL20 A
Average rating: 4.59 (17 ratings)

Prerequisite knowledge

Attendees should have a basic knowledge of deep learning, optimization, and Spark.

Description

Despite much recent progress, deep learning presents a very different workload from those that systems like Spark are optimized for. In particular, these workloads are often bottlenecked by communication between machines. Although communication costs can be reduced with better hardware, this bottleneck limits the benefit of distributed training in settings like EC2.

Robert Nishihara offers an overview of SparkNet, a system for training deep networks in Spark. Instead of building a new deep learning library in Java or Scala, SparkNet provides a framework that lets Spark users construct deep networks using existing deep learning libraries (such as Caffe, TensorFlow, or Torch) as a backend. SparkNet achieves an order-of-magnitude speedup from distributed training relative to Caffe on a single GPU, even in regimes where communication is extremely expensive. Robert also discusses approaches to parallelizing stochastic gradient descent that minimize communication between machines and keep it from becoming a bottleneck.
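The sketch below illustrates the kind of communication-light parallelization the talk describes: each worker runs SGD locally for a fixed number of steps, and the driver averages the resulting parameters once per round. It is a minimal, self-contained example in Spark's Scala API, with a toy linear least-squares model standing in for the deep network; in SparkNet the local SGD step would instead be delegated to a backend such as Caffe. All names here are illustrative, not the actual SparkNet API.

    import org.apache.spark.{SparkConf, SparkContext}
    import scala.util.Random

    object ParamAveragingSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("param-averaging-sketch").setMaster("local[4]"))

        val dim = 10
        // Synthetic (features, label) pairs, partitioned across workers.
        val data = sc.parallelize(0 until 10000, numSlices = 4).map { i =>
          val rnd = new Random(i)
          val x = Array.fill(dim)(rnd.nextGaussian())
          val y = x.sum                       // ground-truth weights are all ones
          (x, y)
        }.cache()

        var weights = Array.fill(dim)(0.0)    // model parameters held on the driver
        val rounds = 20                       // synchronization rounds
        val localIters = 50                   // SGD steps per worker between syncs
        val lr = 0.01                         // step size

        for (_ <- 1 to rounds) {
          val bcast = sc.broadcast(weights)   // ship current parameters to every worker
          // Each partition runs SGD locally, then returns its updated parameters.
          val perWorker = data.mapPartitions { examples =>
            val w = bcast.value.clone()
            for ((x, y) <- examples.take(localIters)) {
              val pred = w.zip(x).map { case (a, b) => a * b }.sum
              val err = pred - y
              for (j <- w.indices) w(j) -= lr * err * x(j)
            }
            Iterator((w, 1L))
          }
          // Average the per-worker parameters on the driver: one small sync per round.
          val (sum, count) = perWorker.reduce { case ((w1, c1), (w2, c2)) =>
            (w1.zip(w2).map { case (a, b) => a + b }, c1 + c2)
          }
          weights = sum.map(_ / count)
        }

        println(weights.mkString(", "))       // should approach all ones
        sc.stop()
      }
    }

Because each round communicates only one parameter vector per worker rather than one gradient per SGD step, the scheme trades some statistical efficiency for far less network traffic, which is the trade-off the talk examines.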


Robert Nishihara

University of California, Berkeley

Robert Nishihara is a fourth-year PhD student in the RISELab at the University of California, Berkeley, advised by Michael Jordan. He works on machine learning, optimization, and artificial intelligence.