Achieving hardware-specific acceleration through the JVM is a necessary and non-trivial component of any distributed neural-network training framework for Spark. We present the design decisions behind, and extensive benchmarks of, distributed matrix computations on Spark, show how these computations accelerate neural network training, and describe the current approach to deep learning on Spark.
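The core idea behind distributed matrix computation, as in the block-partitioned matrices used by Spark's linear algebra package, can be sketched locally. The following is a minimal NumPy illustration, not Spark code: it partitions the operands into blocks and accumulates per-block products, which is the work that would be spread across cluster workers. The function name `block_multiply` and the block size are illustrative choices, not part of any Spark API.

```python
# Illustrative sketch (not Spark code): block-partitioned matrix multiplication,
# the idea underlying distributed matrix computations on a cluster. Each output
# block C[i, j] is the sum over k of A[i, k] @ B[k, j]; in a distributed setting,
# each block product is computed on a worker and the sum is a shuffle + reduce.
import numpy as np

def block_multiply(A, B, block=2):
    n, m = A.shape
    m2, p = B.shape
    # For simplicity, require dimensions to divide evenly into blocks.
    assert m == m2 and n % block == 0 and m % block == 0 and p % block == 0
    C = np.zeros((n, p))
    for i in range(0, n, block):
        for j in range(0, p, block):
            for k in range(0, m, block):
                C[i:i+block, j:j+block] += (
                    A[i:i+block, k:k+block] @ B[k:k+block, j:j+block]
                )
    return C

A = np.arange(16.0).reshape(4, 4)
B = np.eye(4)
print(np.allclose(block_multiply(A, B), A @ B))  # True
```

In Spark itself, the blocks live in a distributed dataset keyed by their block coordinates, so the inner accumulation becomes a join and reduce rather than a local loop.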
Reza Bosagh Zadeh is founder and CEO at Matroid and an adjunct professor at Stanford University, where he teaches two PhD-level classes: Distributed Algorithms and Optimization, and Discrete Mathematics and Algorithms. His work focuses on machine learning, distributed computing, and discrete applied mathematics. His awards include a KDD best paper award and the Gene Golub Outstanding Thesis Award. Reza has served on the technical advisory boards of Microsoft and Databricks, and is the initial creator of the linear algebra package in Apache Spark, through which his work has been incorporated into industrial and academic cluster-computing environments. Reza holds a PhD in computational mathematics from Stanford, where he worked under the supervision of Gunnar Carlsson. As part of his research, he built the machine-learning algorithms behind Twitter's who-to-follow system, the first product at Twitter to use machine learning.
©2015, O'Reilly Media, Inc. All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
Apache Hadoop, Hadoop, Apache Spark, Spark, and Apache are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries, and are used with permission. The Apache Software Foundation has no affiliation with, and does not endorse or review, the materials provided at this event, which is managed by O'Reilly Media and/or Cloudera.