The data science Python ecosystem (NumPy, pandas, and scikit-learn) is efficient and intuitive for advanced analytics workloads. Unfortunately, these tools are restricted to data that fits in memory and to computations that run on a single core. Dask is a parallel computing library that complements this ecosystem by providing a distributed framework for high-performance task scheduling.
Dask parallelizes Python libraries like NumPy and pandas and integrates with popular machine learning libraries like scikit-learn, XGBoost, and TensorFlow. Developed in collaboration with the existing Python communities, this work gives Python users a seamless big data experience for data analysis and complex analytics. These parallel libraries are all backed by the same flexible task scheduler. The scheduler is also part of the public API, and companies commonly use it on its own to build complex, reactive distributed systems for bespoke applications that fall outside the typical use cases of more traditional distributed systems like Spark or Flink.
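The task-scheduling model behind these libraries can be illustrated with a toy example. Dask represents computations as dictionaries mapping keys to `(function, *args)` tuples, where an argument that is itself a key refers to another task's result; the sketch below (the `get` executor here is my own minimal illustration under that assumption, not Dask's actual scheduler code) resolves such a graph recursively:

```python
# Toy illustration of the task-graph model that a Dask-style scheduler
# executes: a dict maps keys to (function, *args) tuples, and arguments
# that are themselves keys refer to other tasks' results.

def inc(x):
    return x + 1

def add(x, y):
    return x + y

# Graph computing add(inc(1), inc(2))
graph = {
    "a": (inc, 1),
    "b": (inc, 2),
    "c": (add, "a", "b"),
}

def get(graph, key, cache=None):
    """Execute the task for `key`, resolving its dependencies first."""
    if cache is None:
        cache = {}
    if key in cache:
        return cache[key]
    func, *args = graph[key]
    # Arguments that name other tasks are computed recursively.
    resolved = [get(graph, a, cache) if a in graph else a for a in args]
    cache[key] = func(*resolved)
    return cache[key]

print(get(graph, "c"))  # → 5
```

A real scheduler would execute independent tasks such as `"a"` and `"b"` in parallel across threads, processes, or machines; the graph representation is what makes that parallelism discoverable.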
Matthew Rocklin discusses the basic architecture of Dask, classes of applications in which it is commonly useful, and how it fits into the broader Hadoop ecosystem.
Matthew Rocklin is an open source software developer at Anaconda focusing on efficient computation and parallel computing, primarily within the Python ecosystem. He has contributed to many of the PyData libraries and today works on Dask, a framework for parallel computing. Matthew holds a PhD in computer science from the University of Chicago, where he focused on numerical linear algebra, task scheduling, and computer algebra.
©2017, O'Reilly Media, Inc.