Custom interactive visualizations and dashboards for one billion datapoints on a laptop in 30 lines of Python

James Bednar (Anaconda), Philipp Rudiger (Anaconda)
1:30pm–5:00pm Tuesday, March 6, 2018
Average rating: 4.50 (2 ratings)

Who is this presentation for?

  • Analysts, scientists, engineers, journalists, and data scientists

Prerequisite knowledge

  • A working knowledge of Python and the Jupyter Notebook

Materials or downloads needed in advance

  • A laptop (Linux, macOS, or Windows) with 8+ GB of RAM and Anaconda or Miniconda installed. Please set up the environment described at http://pyviz.org BEFORE the tutorial, as it includes some large data files used as examples.

What you'll learn

  • Learn how to build dashboards in notebooks, make data easily visualizable, plot millions or billions of datapoints in a web browser, and create a readable, maintainable, reproducible workflow

Description

Data science problems typically consist of common tasks that are repeated across many projects and situations, along with additional custom requirements that differ for each specific application. With Python, these common elements can often be handled by packages already available in the Python software ecosystem, with the data scientist simply writing custom code to stitch them together and finish the task. Particularly in the context of a Jupyter notebook, this approach can handle a wide and diverse range of tasks while requiring relatively little expertise and effort. However, it is often unclear how to select the right set of packages for a particular problem, and a variety of technical problems typically arise in practice.

As a concrete example, a very common use for a data science notebook is to take a dataset of some type, filter or process it, visualize it, and share the results with colleagues. To achieve this seemingly straightforward goal, there are many relevant packages and even more possible combinations of them. The amount of code involved quickly grows as more complex problems are addressed, making the notebooks unreadable and unmaintainable. To keep notebooks maintainable, general-purpose code can be extracted into separate Python modules, but doing so can be very difficult because of interactions between that code and the domain-specific, widget-related, and visualization-related code that tends to be intermingled with it in the notebook. Once code is extracted into separate modules, reproducibility becomes difficult because the notebook now depends on specific versions of external libraries, making it hard for others (and for your future self) to run your notebooks. Interactive notebook-based visualizations inherit the memory limitations of web browsers and thus work well for small datasets but struggle as datasets reach millions or billions of data points. Python-based solutions can also be prohibitively slow, particularly on large datasets, tempting users to switch to far more verbose and less maintainable solutions in compiled languages. Finally, sharing the final results of an analysis with people who do not work with Python is often difficult and can require developing a separate web application when you need to deploy the results more widely.

James Bednar and Philipp Rudiger present an overall workflow for building interactive dashboards in a Jupyter notebook that visualize even billions of data points, with graphical widgets allowing control over data selection, filtering, and display options, all using only a few dozen lines of code. This workflow is based on using the following open source Python packages in a Jupyter Notebook environment (a brief code sketch of how they fit together follows the list):

  • HoloViews and GeoViews: for declaratively specifying visualizable/plottable objects
  • Param: for declaratively specifying user-modifiable parameters
  • conda: for flexibly tracking dependencies and building reproducible environments
  • datashader: for rendering arbitrarily large datasets faithfully as fixed-size images
  • fastparquet: for reading large Parquet files into memory quickly
  • dask: for flexibly dispatching computational tasks to cores or processors
  • Numba: for compiling array-based Python code down to fast machine code
  • Bokeh: for building visualization-based web applications flexibly from Python
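
As a rough illustration of how these pieces combine, a minimal sketch is shown below. It is not the presenters' exact notebook code, and the nyc_taxi.parq file name and the dropoff_x/dropoff_y columns are placeholder assumptions:

    import dask.dataframe as dd
    import holoviews as hv
    from holoviews.operation.datashader import datashade

    hv.extension('bokeh')

    # Lazily load a large Parquet file; dask uses fastparquet as the reader.
    # (Placeholder file and column names; substitute your own dataset.)
    df = dd.read_parquet('nyc_taxi.parq', engine='fastparquet')

    # Declare which columns are the plottable dimensions; this attaches
    # metadata to the data rather than drawing anything yet.
    points = hv.Points(df, kdims=['dropoff_x', 'dropoff_y'])

    # Aggregate all rows into a fixed-size image on the Python side, so the
    # browser receives only that image rather than millions of raw points.
    # In a Jupyter notebook, this last expression displays the interactive
    # Bokeh figure.
    datashade(points).opts(width=800, height=500)

Zooming or panning in the resulting Bokeh figure re-runs the datashader aggregation for the new viewport, so the full dataset stays in Python while the browser only ever handles a single rendered image.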

James and Philipp demonstrate how to use conda to coordinate versions of all these packages, Jupyter to stitch them together, fastparquet to load the large datasets quickly, HoloViews and GeoViews to attach metadata to the data that supports automatic visualization later, Param to declare parameters and ranges of interest to the user independently of the notebook mechanisms, datashader to render the entire dataset into an image to avoid overwhelming the browser (and the user), dask to coordinate datashader’s computation across cores, Numba to accelerate this computation, Bokeh to deliver the visualization as an interactive figure, and Bokeh Server to deploy the cells as a standalone web application that can be shared with colleagues. All of these steps rely only on freely available, domain-general libraries that each do one thing very well and work well with each other. The resulting workflow can easily be retargeted for novel analyses and visualizations of other datasets serving other purposes, making it practical to develop and deploy reproducible high-performance interactive visualizations in any domain using Python.
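
The dashboard and deployment side of the workflow can be sketched in a similarly hedged way. The tutorial itself predates the Panel library, but Panel (from the same PyViz/HoloViz ecosystem) now provides the glue between Param-declared parameters, widgets, and Bokeh Server described here; the TaxiDashboard class, the nyc_taxi.parq file, and the passenger_count column are illustrative assumptions rather than the presenters' actual code:

    import dask.dataframe as dd
    import holoviews as hv
    import param
    import panel as pn
    from holoviews.operation.datashader import datashade

    hv.extension('bokeh')

    # Placeholder dataset and columns; substitute your own Parquet file.
    df = dd.read_parquet('nyc_taxi.parq', engine='fastparquet')

    class TaxiDashboard(param.Parameterized):
        # Parameters of interest are declared here, independently of any
        # widget toolkit; Panel maps them to widgets automatically.
        passengers = param.Range(default=(1, 6), bounds=(1, 6),
                                 doc="Range of passenger counts to include")

        def view(self):
            # Filter lazily with dask, then declare and datashade the points.
            lo, hi = self.passengers
            subset = df[(df.passenger_count >= lo) & (df.passenger_count <= hi)]
            points = hv.Points(subset, kdims=['dropoff_x', 'dropoff_y'])
            return datashade(points).opts(width=800, height=500)

    dashboard = TaxiDashboard()

    # In a notebook this displays widgets next to the live plot; the same
    # object can be deployed as a standalone app with "panel serve app.py".
    app = pn.Row(dashboard.param, dashboard.view)
    app.servable()

Because the parameters are declared on a class rather than tied to particular widgets, the same object can be displayed live in the notebook or deployed as a standalone web application on Bokeh Server (here via panel serve), which is what makes the results shareable with colleagues who do not use Python.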


James Bednar

Anaconda

James Bednar is a senior solutions architect at Anaconda. Previously, Jim was a lecturer and researcher in computational neuroscience at the University of Edinburgh, Scotland, and a software and hardware engineer at National Instruments. He manages the open source Python projects datashader, HoloViews, GeoViews, ImaGen, and Param. He has published more than 50 papers and books about the visual system, data visualization, and software development. Jim holds a PhD in computer science from the University of Texas as well as degrees in electrical engineering and philosophy.


Philipp Rudiger

Anaconda

Philipp Rudiger is a software developer at Anaconda, where he develops open source and client-specific software solutions for data management, visualization, and analysis. Philipp holds a PhD in computational modeling of the visual system.