JupyterHub has been successfully deployed at various high-performance computing (HPC) centers as a means of enabling users to perform large-scale, data-intensive computations in Jupyter Notebooks. In this talk, we will provide a brief overview of our JupyterHub deployment, which allows our users to run Jupyter Notebooks on two large-scale computing resources: a medium-sized traditional HPC cluster (2,000 nodes, 20,000 cores) and a 40-node, 3.64 PB Hadoop cluster. This deployment supports a wide variety of research and teaching use cases at our institute. We will present several examples of these and discuss solutions to practical problems that we encountered in supporting them:
1. Enabling parallel programming in Notebooks: e.g., MPI, GPUs, and Spark
2. Environment modules and customizable shell environments in Jupyter Notebooks
3. Custom kernels for Jupyter Notebooks
4. Integrating Singularity containers and Jupyter Notebook
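As a brief illustration of items 2 and 3, a custom Jupyter kernel is registered via a `kernel.json` spec, whose `argv` can point at a wrapper script that prepares the shell environment (e.g., loads environment modules) before starting the kernel. The sketch below is a minimal, hypothetical example; the wrapper path, module names, and display name are assumptions, not our actual configuration:

```json
{
  "display_name": "Python 3 (HPC modules)",
  "language": "python",
  "argv": [
    "/opt/jupyter/kernels/module-wrapper.sh",
    "-f", "{connection_file}"
  ]
}
```

Here `module-wrapper.sh` would run something like `module load python mpi` and then `exec python3 -m ipykernel_launcher "$@"`, so the notebook kernel inherits the same environment a user would get in an interactive shell on the cluster.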
This talk is ideal for anyone interested in, or currently deploying, JupyterHub in an HPC environment. Prior knowledge of JupyterHub is not strictly required but would be beneficial.