The data science Python ecosystem (NumPy, pandas, and scikit-learn) is efficient and intuitive for advanced analytics workloads. Unfortunately, these tools are limited to data that fits in memory and to computations that run on a single core. Dask is a parallel computing library that complements the Python ecosystem with a distributed framework for high-performance task scheduling.
Dask now parallelizes Python libraries like NumPy and pandas, parts of scikit-learn, and more custom algorithms. This work was done in collaboration with those core development communities and has produced a seamless big data experience for Python users doing data analysis and complex analytics.
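As a small illustration of the task-scheduling model described above, here is a minimal sketch using `dask.delayed` (this assumes Dask is installed, e.g. via `pip install dask`; the function names `inc` and `add` are illustrative, not from the talk):

```python
import dask


@dask.delayed
def inc(x):
    # A plain Python function, wrapped so calls build a task graph
    # instead of executing immediately.
    return x + 1


@dask.delayed
def add(x, y):
    return x + y


# Build a small task graph lazily; nothing has run yet.
a = inc(1)
b = inc(2)
total = add(a, b)

# compute() executes the graph; independent tasks (here, the two
# inc calls) can run in parallel on Dask's local scheduler.
print(total.compute())  # 5
```

The same lazy-graph-plus-scheduler idea underlies Dask's parallel array and dataframe collections, which mimic the NumPy and pandas APIs while splitting work across chunks.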
Matthew Rocklin discusses the basic architecture of Dask, the classes of applications where it is commonly useful, and how it fits into the broader Hadoop ecosystem.
Matthew Rocklin is an open source software developer at Anaconda focusing on efficient computation and parallel computing, primarily within the Python ecosystem. He has contributed to many of the PyData libraries and today works on Dask, a framework for parallel computing. Matthew holds a PhD in computer science from the University of Chicago, where he focused on numerical linear algebra, task scheduling, and computer algebra.
©2017, O’Reilly UK Ltd