Presented By O’Reilly and Cloudera
Make Data Work
September 11, 2018: Training & Tutorials
September 12–13, 2018: Keynotes & Sessions
New York, NY

Making interactive browser-based visualizations easy in Python

James Bednar (Anaconda)
9:00am–12:30pm Tuesday, 09/11/2018
Visualization and user experience
Location: 1E 09
Level: Intermediate
Average rating: 4.60 (5 ratings)

Who is this presentation for?

  • Analysts, engineers, developers, journalists, and data scientists with basic knowledge of Python

Prerequisite knowledge

  • Basic knowledge of Python
  • Familiarity with the Jupyter Notebook

Materials or downloads needed in advance

What you'll learn

  • Learn how to make data easily visualizable, build dashboards in notebooks, plot millions or billions of datapoints in a web browser, and create a readable, maintainable, reproducible workflow

Description

Solving data science problems involves some tasks common to many projects and situations, plus custom requirements that differ for each specific application. With Python, these common elements can often be handled by packages already available in the Python software ecosystem. The data scientist then simply writes custom code (typically in a Jupyter notebook) to stitch them together and finish the task. This approach can handle a wide range of problems while requiring relatively little software development skill or effort. However, it is often unclear how to select the right set of packages for a particular problem, and a variety of technical issues typically arise in practice.

As a concrete example, a very common use for a data science notebook is to take a dataset of some type, filter or process it, visualize it, and share the results with colleagues. Achieving this seemingly straightforward goal can involve a great many potentially relevant packages and even more possible combinations of those packages, each of which can present practical problems that are daunting to overcome:

  1. The amount of code involved quickly increases as more complex problems are addressed, making the notebooks unreadable and unmaintainable.
  2. To make the notebooks maintainable, general-purpose code can be extracted into separate Python modules, but doing so can be very difficult because of interactions between that code and the domain-specific, widget-related, and visualization-related code that tends to be intermingled in notebook-based visualizations.
  3. As soon as code is extracted into separate modules, reproducibility becomes difficult because the notebook now depends on specific versions of those external libraries, making it hard for others to run your notebooks (and for you to run them again later).
  4. Interactive notebook-based visualizations inherit the memory limitations of web browsers and thus work well for small datasets but struggle as datasets reach millions or billions of data points.
  5. Python-based solutions can be prohibitively slow, particularly when working with large datasets, making it tempting for users to switch to less maintainable and far more verbose solutions in compiled languages.
  6. Sharing the final results of an analysis with people who do not work with Python is often difficult, and deploying the results more widely can require developing a separate web application.

The new PyViz.org initiative is designed to eliminate these difficulties by smoothing over differences and incompatibilities between many of these packages, providing additional functionality where necessary to optimize key steps, and providing a comprehensive set of examples and tutorials that show how to put the packages together into solutions for real problems.

James Bednar guides you through an overall workflow for building interactive notebooks and dashboards that visualize even billions of data points, with graphical widgets allowing custom control over data selection, filtering, and display options, all using only a few dozen lines of code. James also demonstrates how the same approach makes it simple to work with live streaming data, complex custom interactivity, very high-dimensional datasets, and geographic data. This workflow is based on using the following open source Python packages in a Jupyter Notebook environment, each labeled with the problem(s) it addresses from the list above (a brief sketch of this style of code follows the list):

  • HoloViews and GeoViews: Declarative specification for visualizable and plottable objects (1)
  • Param: Declarative specification for user-modifiable parameters (1, 2)
  • Conda: Flexible dependency tracking for building reproducible environments (3)
  • Datashader: Rendering arbitrarily large datasets faithfully as fixed-size images (4)
  • fastparquet: Fast reading of large columnar datasets into memory (5)
  • Dask: Flexibly dispatching computational tasks to cores or processors (5)
  • Numba: Compiling array-based Python code down to fast machine code (5)
  • Bokeh: Building visualization-based web applications flexibly from Python (6)
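
To give a flavor of this declarative style, the following minimal sketch shows how a HoloViews plot and a set of Param parameters might be declared. The DataFrame, its column names, and the specific options are hypothetical placeholders rather than code from the tutorial, and exact APIs may differ between library versions.

    import numpy as np
    import pandas as pd
    import holoviews as hv
    import param

    hv.extension('bokeh')  # use the Bokeh plotting backend in the notebook

    # Hypothetical small dataset standing in for real data.
    df = pd.DataFrame({'x': np.random.randn(1000),
                       'y': np.random.randn(1000)})

    # HoloViews: declare what the data *is* (its key dimensions), not how
    # to draw it; an interactive Bokeh figure is generated automatically.
    points = hv.Points(df, kdims=['x', 'y'])

    # Param: declare user-modifiable parameters with types, bounds, and
    # documentation, independently of any notebook or GUI machinery;
    # widgets can later be generated from these declarations.
    class PlotOptions(param.Parameterized):
        alpha = param.Magnitude(default=0.5, doc="Opacity of the points")
        color = param.Selector(default='blue',
                               objects=['blue', 'red', 'green'])

    opts = PlotOptions()
    plot = points.opts(alpha=opts.alpha, color=opts.color)

Displaying plot in a notebook with the Bokeh backend loaded produces an interactive figure, and Param-aware tools in the PyViz ecosystem can generate control widgets from PlotOptions automatically.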

James demonstrates how to use Conda to coordinate versions of all these packages, Jupyter to stitch them together, fastparquet to load the large datasets quickly, HoloViews and GeoViews to attach metadata to the data that supports automatic visualization later, Param to declare parameters and ranges of interest to the user independently of the notebook mechanisms that will later become widgets automatically, Datashader to render the entire dataset into an image to avoid overwhelming the browser (and the user), Dask to coordinate Datashader’s computation across cores, Numba to accelerate this computation, Bokeh to deliver the visualization as an interactive figure, and Bokeh Server to deploy the cells as a standalone web application that can be shared with colleagues. All of these steps rely only on freely available, domain-general libraries that each do one thing very well and are designed to work well with each other. The resulting workflow can easily be retargeted for novel analyses and visualizations of other datasets serving other purposes, making it practical to develop and deploy concise, reproducible high-performance interactive visualizations in any domain using Python.
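
As a rough sketch of that end-to-end flow, the code below assumes a Conda environment with dask, fastparquet, holoviews, datashader, and bokeh installed; the Parquet file, its column names, and the deployment details are hypothetical placeholders, and the available functions depend on the library versions pinned in that environment.

    import dask.dataframe as dd
    import holoviews as hv
    from holoviews.operation.datashader import datashade

    hv.extension('bokeh')

    # fastparquet/Dask: read a large columnar dataset lazily so it can be
    # processed in parallel across cores ('trips.parq' and its columns are
    # placeholders for your own data).
    df = dd.read_parquet('trips.parq', engine='fastparquet')
    df = df.persist()  # keep the working set in memory between interactions

    # HoloViews: declare what the data is; no plotting details yet.
    points = hv.Points(df, kdims=['dropoff_x', 'dropoff_y'])

    # Datashader: rasterize all of the points into a fixed-size image on
    # the Python side (using Dask and Numba under the hood), so the browser
    # receives an image rather than millions of individual glyphs.
    shaded = datashade(points).opts(width=800, height=500)

    # Bokeh Server: placing this code in app.py and running
    # `bokeh serve app.py` deploys the plot as a standalone web application.
    doc = hv.renderer('bokeh').server_doc(shaded)
    doc.title = "Datashaded trips"

In a Jupyter notebook, simply displaying shaded gives the same interactive plot inline; running the script under Bokeh Server is only needed when sharing the result as a standalone application with colleagues who do not use Python.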

James Bednar

Anaconda

James Bednar is a senior solutions architect at Anaconda. Previously, Jim was a lecturer and researcher in computational neuroscience at the University of Edinburgh, Scotland, and a software and hardware engineer at National Instruments. He manages the open source Python projects Datashader, HoloViews, GeoViews, ImaGen, and Param. He has published more than 50 papers and books about the visual system, data visualization, and software development. Jim holds a PhD in computer science from the University of Texas as well as degrees in electrical engineering and philosophy.