Tuning Apache Spark is somewhat of a dark art, although thankfully when it goes wrong all we tend to lose is several hours of our day and our employer's money. This talk will look at how we can go about auto-tuning selected workloads using a combination of live and historical data.
Much of the data required to effectively tune jobs is already collected inside Spark; we just need to understand it. This talk will look at some sample auto-tuners and discuss options for improving them and for applying similar techniques in your own work.
This talk will also look at what kind of tuning can be done statically (i.e., without depending on historical information), as well as Spark's own built-in components for auto-tuning (currently, dynamically scaling cluster size) and how we can improve them.
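As a point of reference for the built-in dynamic scaling mentioned above, Spark's dynamic allocation is controlled through a handful of configuration properties. A minimal sketch, as it might appear in `spark-defaults.conf` (the values here are illustrative assumptions, not recommendations):

```properties
# Enable dynamic allocation so Spark scales executor count with demand
spark.dynamicAllocation.enabled            true
# The external shuffle service is required so shuffle files survive executor removal
spark.shuffle.service.enabled              true
# Bounds on how far the cluster may scale (illustrative values)
spark.dynamicAllocation.minExecutors       2
spark.dynamicAllocation.maxExecutors      20
# Release executors that have been idle for this long
spark.dynamicAllocation.executorIdleTimeout  60s
```

These are among the knobs an auto-tuner can adjust; the right bounds and timeouts depend heavily on the workload, which is exactly where live and historical data come in.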
Even if the idea of building an “auto-tuner” sounds as appealing as “using a rusty spoon to debug the JVM on a haunted super computer”, this talk will give you a better understanding of the knobs available to you to tune your Apache Spark jobs.
*Also, to be clear, we don't promise to stop your pager from going off at 2am; we just hope this helps.
Holden is a trans Canadian open source developer advocate with a focus on Apache Beam, Spark, and related “big data” tools. She is the co-author of Learning Spark, High Performance Spark, and another Spark book that’s a bit more out of date. She is a committer on the Apache Spark, SystemML, and Mahout projects. Prior to joining Google as a Developer Advocate she worked at IBM, Alpine, Databricks, Google (yes this is her second time), Foursquare, and Amazon. She was tricked into the world of big data while trying to improve recommendation systems and has long since forgotten her original goal. Outside of work she enjoys playing with fire, riding scooters, and dancing.
Rachel Warren is a programmer, data analyst, adventurer, and aspiring data scientist. After spending a semester helping teach algorithms and software engineering in Africa, Rachel has returned to the Bay Area, where she is looking for work as a data scientist or programmer. Previously, Rachel worked as an analyst for both Pandora and the Political Science department at Wesleyan. She is currently interested in pursuing a more technical, algorithmic, approach to data science and is particularly passionate about dynamic learning algorithms (ML) and text analysis. Rachel holds a BA in computer science from Wesleyan University, where she completed two senior projects: an application which uses machine learning and text analysis for the Computer Science department and a critical essay exploring the implications of machine learning on the analytic philosophy of language for the Philosophy department.
Anya loves her position as Senior Member of Technical Staff (SRE) at Salesforce. She’s also a co-organizer of the SF Big Analytics meetup group, and is always looking for ways to make platforms more scalable / cost efficient / secure. Before Salesforce, Anya enjoyed contributing at Alpine Data where she focused on Spark Operations. The opinions expressed in this presentation do not reflect those of Anya’s employers, past or present.
©2018, O’Reilly UK Ltd • All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.