Presented By O'Reilly and Cloudera
Make Data Work
Sept 29–Oct 1, 2015 • New York, NY

GPU/CPU acceleration for matrix computations and neural networks on Spark

Reza Zadeh (Matroid | Stanford)
9:05am–9:30am Tuesday, 09/29/2015
Hardcore Data Science
Location: 1 E10/1 E11 Level: Intermediate
Average rating: 3.33 (3 ratings)

Achieving hardware-specific acceleration through the JVM is a necessary and non-trivial component of any distributed neural network learning framework for Spark. We present the design decisions behind, and extensive benchmarks of, distributed matrix computations on Spark. Using these matrix computations to accelerate neural network training, we describe the current approach to deep learning on Spark.

Reza Zadeh

Matroid | Stanford

Reza Bosagh Zadeh is founder and CEO at Matroid and an adjunct professor at Stanford University, where he teaches two PhD-level classes: Distributed Algorithms and Optimization and Discrete Mathematics and Algorithms. His work focuses on machine learning, distributed computing, and discrete applied mathematics. His awards include a KDD best paper award and the Gene Golub Outstanding Thesis Award. Reza has served on the technical advisory boards of Microsoft and Databricks. He is the initial creator of the linear algebra package in Apache Spark. Through Apache Spark, Reza’s work has been incorporated into industrial and academic cluster computing environments. Reza holds a PhD in computational mathematics from Stanford, where he worked under the supervision of Gunnar Carlsson. As part of his research, Reza built the machine learning algorithms behind Twitter’s who-to-follow system, the first product to use machine learning at Twitter.