Spark is the most widely used computing framework on JD.com's big data platform, and it depends heavily on memory resources. This places a heavy burden on the clusters as a whole: users typically have to tune each workload by hand, increasing the memory or CPU cores allocated to every Spark executor.
JD.com recently designed a new architecture that optimizes its Spark computing clusters by separating the compute stage and the shuffle (spill) stage into different clusters. In this design, the shuffle manager writes shuffle data out to an in-memory storage cluster, reducing the memory burden on the compute cluster, while fast storage devices extend the effective memory capacity of the storage cluster. Yue Li and Shouwei Chen detail the problems the team faced while building the system and explain how the company now benefits from the in-memory distributed filesystem. Join in to learn how the system increases memory capacity while decreasing the memory cost of each executor.
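In Spark terms, redirecting shuffle output like this is typically wired up through the pluggable shuffle machinery. The sketch below is illustrative only: `spark.shuffle.manager` is the standard Spark knob for plugging in a custom shuffle manager, but the `RemoteShuffleManager` class name and the reduced executor sizing are assumptions standing in for whatever the team actually built, not their real configuration.

```shell
# Hypothetical spark-submit invocation. com.example.shuffle.RemoteShuffleManager
# is a placeholder for a custom ShuffleManager that ships shuffle/spill blocks
# to the remote in-memory storage cluster instead of local executor disks.
# With shuffle data offloaded, each executor can run with a smaller memory grant.
spark-submit \
  --master yarn \
  --conf spark.shuffle.manager=com.example.shuffle.RemoteShuffleManager \
  --conf spark.executor.memory=4g \
  --conf spark.executor.cores=2 \
  --class com.example.WordCount \
  wordcount.jar
```

The point of the sketch is the division of labor: the compute cluster keeps only working-set memory, while shuffle intermediates live in the storage tier.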
Yue Li is a cofounder at MemVerge, where he and his colleagues develop the company's core technologies. Previously, he was a senior postdoctoral fellow at the California Institute of Technology. He has extensive research experience in both theoretical and experimental aspects of algorithms for nonvolatile memories. Yue holds a PhD in computer science from Texas A&M University and a bachelor's degree in computer science from Huazhong University of Science & Technology.
Shouwei Chen is an ECE PhD student at Rutgers University, advised by Ivan Rodero. Shouwei’s research focuses on the codesign of a memory-centric computing framework with an in-memory distributed filesystem.
©2019, O'Reilly Media, Inc.