Brought to you by NumFOCUS Foundation and O’Reilly Media Inc.
The official Jupyter Conference
August 22-23, 2017: Training
August 23-25, 2017: Tutorials & Conference
New York, NY
Reproducible research and open science
Moderated by: Cilicia Uzziel Perez
Curious students from developing countries have limited opportunities to take part in particle physics research, due to a lack of formal institutional connections that would give them free access to data and formal analysis courses. We present, from a beginner's perspective, how Jupyter and CERN Open Data have bridged these educational gaps through easy-to-understand, interactive notebooks.
Usage and application
Maarten Breddels (Kapteyn Astronomical Institute, University of Groningen)
Maarten Breddels offers an overview of vaex, a Python library that enables calculating statistics for a billion samples per second on a regular n-dimensional grid, and ipyvolume, a library that enables volume and glyph rendering in Jupyter notebooks. Together, these libraries allow the interactive visualization and exploration of large, high-dimensional datasets in the Jupyter Notebook.
Usage and application
Diogo Munaro Vieira (globo.com), Felipe Ferreira (globo.com)
JupyterHub is an important tool for research and data-driven decisions at Globo.com. Here, we show how all of Globo.com's data scientists can use Jupyter notebooks for data analysis and machine learning, with no installation or configuration, to make decisions that impact 50 million users per month.
Extensions and customization
Moderated by: Roy Hyunjin Han
Jupyter Notebook is already great, but did you know that you can use it to prototype computational web applications? In this whirlwind tour, we will introduce you to several favorite open source plugins that we have been using for the past few years (many of which we have developed) that let us rapidly deploy tools for processing tables, images, spatial data, satellite images, sounds and video.
JupyterHub deployments
Moderated by: Ashwin Trikuta Srinath, Linh Ngo, & Jeff Denton
This talk covers how to build a JupyterHub deployment with a rich set of features for interactive HPC, along with solutions to practical problems encountered when integrating JupyterHub with other components of HPC systems. We will present several examples of how researchers at our institution are using JupyterHub and demonstrate the different parts of our setup that enable their applications.
Development and community
Moderated by: Feyzi Bagirov & Tatiana Yarmola
Poor data quality frequently invalidates data analysis, especially when performed in Excel, the most commonplace business intelligence tool, on data that underwent transformations, imputations, and manual manipulations. In this talk we will use Pandas to walk through an example of Excel data analysis and illustrate several common pitfalls that make this analysis invalid.
Come enjoy delicious snacks and beverages with fellow JupyterCon attendees, speakers, and sponsors.
Extensions and customization
Daina Bouquin (Harvard-Smithsonian Center for Astrophysics), John DeBlase (Freelance)
Performing network analytics with NetworkX and Jupyter often results in difficult-to-examine hairballs rather than useful visualizations. Meanwhile, more flexible tools like SigmaJS have high learning curves for people new to JavaScript. Daina Bouquin and John DeBlase share a simple, flexible architecture that can help create beautiful JavaScript networks without ditching the Jupyter Notebook.
JupyterHub deployments
Scott Sanderson (Quantopian)
Scott Sanderson describes the architecture of the Quantopian Research Platform, a Jupyter Notebook deployment serving a community of over 100,000 users, explaining how, using standard extension mechanisms, it provides robust storage and retrieval of hundreds of gigabytes of notebooks, integrates notebooks into an existing web application, and enables sharing notebooks between users.
Extensions and customization
Ali Marami (R-Brain Inc)
JupyterLab provides a robust foundation for building flexible computational environments. Ali Marami explains how R-Brain leveraged the JupyterLab extension architecture to build a powerful IDE for data scientists, one of the few tools in the market that evenly supports R and Python in data science and includes features such as IntelliSense, debugging, and environment and data view.
Jupyter subprojects
Moderated by: Luciano Resende & Jakob Odersky
Data scientists are becoming a necessity for every company in today's data-centric world, and with them comes the requirement to provide a flexible and interactive analytics platform. This session describes our experience and best practices in putting together an analytics platform based on Jupyter notebooks, Apache Toree, and Apache Spark.
Reproducible research and open science
Bernie Randles (UCLA), Catherine Zucker (Harvard University)
Although researchers have traditionally cited code and data related to their publications, they are increasingly using the Jupyter Notebook to share the processes involved in the act of scientific inquiry. Bernie Randles and Catherine Zucker explore various aspects of citing Jupyter notebooks in publications, discussing benefits, pitfalls, and best practices for creating the "paper of the future."
Keynotes
Program chairs Andrew Odewahn and Fernando Perez close the first day of keynotes.
Keynotes
Program chairs Fernando Perez and Andrew Odewahn close the second day of keynotes.
Reproducible research and open science
Mark Hahnel (Figshare), Marius Tulbure (Figshare)
Reports of a lack of reproducibility have led funders and others to require open data and code as the outputs of research they fund. Mark Hahnel and Marius Tulbure discuss the opportunities for Jupyter notebooks to be the final output of academic research, arguing that Jupyter could help disrupt the inefficiencies in cost and scale of open access academic publishing.
Usage and application
Kazunori Sato (Google)
Kazunori Sato explains how you can use Google Cloud Datalab—a Jupyter environment from Google that seamlessly integrates BigQuery, TensorFlow, and other Google Cloud services—to easily run SQL queries from Jupyter to access terabytes of data in seconds and train a deep learning model with TensorFlow on tens of GPUs in the cloud, with all the usual tools available in Jupyter.
Usage and application
yoshi NOBU Masatani (National Institute of Informatics)
Jupyter is useful for DevOps. It enables collaboration between experts and novices to accumulate infrastructure knowledge, while automation via notebooks enhances traceability and reproducibility. Yoshi Nobu Masatani shows how to combine Jupyter with Ansible for reproducible infrastructure and explores knowledge, workflow, and customer support as literate computing practices.
Reproducible research and open science
Moderated by: Paco Nathan
Paco Nathan shares lessons learned about using notebooks in media and explores computable content that combines Jupyter notebooks, video timelines, Docker containers, and HTML/JS for "last mile" presentation, covering system architectures, how to coach authors to be effective with the medium, whether live coding can augment formative assessment, and the typical barriers encountered in practice.
Extensions and customization
Moderated by: Diogo Munaro Vieira & Felipe Ferreira
At Globo.com, all of our data scientists use Jupyter notebooks for analysis. Their analyses require security because they work on our shared data science platform. We will show how JupyterHub was adapted to authenticate against the company's OAuth2 solution and to track user actions via Jupyter notebook hooks.
Usage and application
Andreas Mueller (Columbia University)
Tutorial Please note: to attend, your registration must include Tutorials on Wednesday.
Andreas Müller walks you through a variety of real-world datasets using Jupyter notebooks together with the data analysis packages pandas, seaborn, and scikit-learn. You'll perform an initial assessment of data, deal with different data types, visualization, and preprocessing, and build predictive models for tasks such as health care and housing.
Usage and application
Laurent Gautier (Verily)
Tutorial Please note: to attend, your registration must include Tutorials on Wednesday.
Python is popular for data analysis, but restricting yourself to Python means missing the wealth of libraries and capabilities available in R or SQL. Laurent Gautier walks you through a pragmatic, reasonable, and good-looking polyglot approach, all thanks to R visualizations.
Usage and application
Natalino Busa (Teradata)
Jupyter notebooks are transforming the way we look at computing, coding, and science. But is this the only "data scientist experience" that this technology can provide? Natalino Busa explains how you can create interactive web applications for data exploration and analysis that in the background are still powered by the well-understood and well-documented Jupyter Notebook.
Moderated by: Greg Werner
3Blades has developed an innovative artificial intelligence agent to enhance productivity for data scientists when using Jupyter Notebooks for Exploratory Data Analysis (EDA).
Usage and application
Gunjan Baid (UC Berkeley), Vinitra Swamy (UC Berkeley)
Engaging critically with data is now a required skill for students in all areas, but many traditional data science programs aren’t easily accessible to those without prior computing experience. Gunjan Baid and Vinitra Swamy explore UC Berkeley's Data Science program—1,200 students across 50 majors—explaining how its pedagogy was designed to make data science accessible to everyone.
Usage and application
Moderated by: Sam Kennerly
Scientists, quants, and data analysts spend too much time setting up (and often making a mess of) our software environments. The problems compound when we attempt to share code. Open-source tools from Docker and Anaconda can help avoid this "dependency hell." With containers, almost anyone can interact with your Jupyter notebooks as if they were running on your own computer.
Development and community
David Taieb (IBM), Prithwish Chakraborty (IBM Watson Health), Faisal Farooq (IBM Watson Health)
David Taieb, Prithwish Chakraborty, and Faisal Farooq offer an overview of PixieDust, a new open source library that speeds data exploration with interactive autovisualizations that make creating charts easy and fun.
Kernels
Tim Gasper (Bitfusion), Pierce Spitler (Bitfusion)
Combined with GPUs, Jupyter makes for fast development and fast execution, but it is not always easy to switch from a CPU execution context to GPUs and back. Tim Gasper and Pierce Spitler share best practices on doing deep learning with Jupyter and explain how to work with CPUs and GPUs more easily by using Elastic GPUs and quick-switching between custom kernels.
Reproducible research and open science
Matt Burton (University of Pittsburgh)
While Jupyter notebooks are a boon for computational science, they are also a powerful tool in the digital humanities. Matt Burton offers an overview of the digital humanities community, discusses defactoring, a novel use of Jupyter notebooks to analyze computational research, and reflects upon Jupyter's relationship to scholarly publishing and the production of knowledge.
JupyterHub deployments
Yuvi Panda (Wikimedia Foundation)
Open data by itself is not enough. Yuvi Panda explains how providing free, open, and public computational infrastructure with easy access to open data has helped people of all backgrounds easily use data however they want, and why other organizations providing open data should do the same.
Reproducible research and open science
Lindsey Heagy (University of British Columbia), Rowan Cockett (3point Science)
Web-based textbooks and interactive simulations built in Jupyter notebooks provide an entry point for course participants to reproduce content they are shown and dive into the code used to build them. Lindsey Heagy and Rowan Cockett share strategies and tools for developing an educational stack that emerged from the deployment of a course on geophysics and some lessons learned along the way.
Usage and application
James Bednar (Continuum Analytics), Philipp Rudiger (Continuum Analytics)
Tutorial Please note: to attend, your registration must include Tutorials on Wednesday.
It can be difficult to assemble the right set of packages from the Python scientific software ecosystem to solve complex problems. James Bednar and Philipp Rudiger walk you step by step through making and deploying a concise, fast, and fully reproducible recipe for interactive visualization of millions or billions of data points using very few lines of Python in a Jupyter notebook.
JupyterHub deployments
Min Ragan-Kelley (Simula Research Laboratory), Carol Willing (Cal Poly San Luis Obispo), Yuvi Panda (Wikimedia Foundation), Ryan Lovett (Department of Statistics, UC Berkeley)
Tutorial Please note: to attend, your registration must include Tutorials on Wednesday.
JupyterHub, a multiuser server for Jupyter notebooks, enables you to offer a notebook server to everyone in a group—which is particularly useful when teaching a course, as students no longer need to install software on their laptops. Min Ragan-Kelley, Carol Willing, Yuvi Panda, and Ryan Lovett get you started deploying and customizing JupyterHub for your needs.
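To give a flavor of what "deploying and customizing" means in practice, JupyterHub is configured through a single `jupyterhub_config.py` file of traitlet settings. The sketch below is a minimal, illustrative example only (the username is invented, and PAM is simply JupyterHub's default authenticator), not the tutorial's actual material:

```python
# jupyterhub_config.py -- a minimal illustrative sketch, not a recommended setup.
# The `c` object is the configuration handle JupyterHub injects into this file.

c.JupyterHub.ip = ''                 # listen on all interfaces
c.JupyterHub.port = 8000             # public-facing port of the Hub

# PAM (system accounts) is the default authenticator; courses often swap in
# an OAuth or LDAP authenticator instead.
c.JupyterHub.authenticator_class = 'jupyterhub.auth.PAMAuthenticator'
c.Authenticator.admin_users = {'instructor'}   # hypothetical admin account

# Spawner options control what each user's single-user server looks like.
c.Spawner.default_url = '/tree'      # landing page for each user's server
```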
Extensions and customization
Moderated by: Joy Chakraborty
How to run a Kerberos-secured, multiuser Jupyter notebook deployment (JupyterHub) integrated with a Spark/YARN cluster, and how to use Docker to set up such a complex integrated platform quickly and with fewer difficulties.
Development and community
Leah Silen (NumFOCUS), Andy Terrel (NumFOCUS)
What do the discovery of the Higgs boson, the landing of the Philae robot, the analysis of political engagement, and the freedom of human trafficking victims have in common? NumFOCUS projects were there. Join Leah Silen and Andy Terrel to learn how, together, we can empower scientists and save humanity.
Extensions and customization
Moderated by: Steven Anton
Sometimes data scientists need to work directly with highly sensitive data, such as personally identifiable information or health records. Jupyter notebooks provide a great platform for exploration, but don't meet strict security standards. We will walk through a solution that our data science team uses to harden security by seamlessly encrypting notebooks at rest.
Usage and application
Karlijn Willems (DataCamp)
Drawing inspiration from narrative theory and design thinking and exploring real-world examples, Karlijn Willems walks you through effectively using Jupyter notebooks to guide the data journalism workflow and tackle some of the challenges that data can pose to data journalism.
Usage and application
Moderated by: Andrey Petrin
Big data analytics is already outdated at Yandex; we need insights and action items from our logs and databases. In this new environment, speed of prototyping comes first. I'll give an overview of how we use Python and Jupyter to create prototypes that amaze and inspire real product creation.
Industry Table discussions are a great way to informally network with people in similar industries or interested in the same topics.
Meet the Experts are your chance to meet face-to-face with JupyterCon presenters in a small-group setting. Drop in to discuss their sessions, ask questions, or make suggestions.
Keynotes
Fernando Perez (Lawrence Berkeley National Laboratory and UC Berkeley), Andrew Odewahn (O'Reilly Media)
Program chairs Fernando Perez and Andrew Odewahn open the second day of keynotes.
Extensions and customization
Matt Greenwood (Two Sigma Investments)
Matt Greenwood introduces BeakerX, a set of Jupyter Notebook extensions that enable polyglot data science, time series plotting and processing, research publication, and integration with Apache Spark. Matt reviews the Jupyter extension architecture and how BeakerX plugs into it, covers the current set of BeakerX capabilities, and discusses the pivot from Beaker, a standalone notebook, to BeakerX.
Sponsored
Peter Wang (Continuum Analytics)
Peter Wang explores open source commercial companies, offering a firsthand account of the unique challenges of building a company that is fundamentally centered around sustainable open source innovation and sharing guidelines for how to carry volunteer-based open source values forward, intentionally and thoughtfully, in a data-centric world.
Reproducible research and open science
Thorin Tabor (University of California, San Diego)
Thorin Tabor offers an overview of the GenePattern Notebook, which allows Jupyter to communicate with the open source GenePattern environment for integrative genomics analysis. It wraps hundreds of software tools for analyzing omics data types, as well as general machine learning methods, and makes them available through a user-friendly interface.
Extensions and customization
Chris Kotfila (Kitware)
Chris Kotfila offers an overview of the GeoNotebook extension to the Jupyter Notebook, which provides interactive visualization and analysis of geospatial data. Unlike other geospatial extensions to the Jupyter Notebook, GeoNotebook includes a fully integrated tile server providing easy visualization of vector and raster data formats.
Usage and application
Christopher Wilcox (Microsoft)
Have you thought about what it takes to host 500+ Jupyter users concurrently? What about managing 17,000+ users and their content? Christopher Wilcox explains how Azure Notebooks does this daily and discusses the challenges faced in designing and building a scalable Jupyter service.
Usage and application
Moderated by: Douglas Liming
Ready to take a deeper look at how the Jupyter platform is having a widespread impact on analytics? Learn how a large health organization was able to fit SAS into their open ecosystem. Thanks to the Jupyter platform, you no longer have to choose between analytics languages like Python, R, and SAS; a single, unified open analytics platform supported by Jupyter empowers you to have it all.
Usage and application
Moderated by: Chris Rawles
The availability of data combined with new analytical tools has fundamentally transformed the sports industry. In this talk, I show how to use the Jupyter Notebook with powerful analytical tools such as Apache Spark and visualization tools like Matplotlib and seaborn to assist sports data science.
Reproducible research and open science
Zach Sailer (University of Oregon)
Scientific research thrives on collaborations between computational and experimental groups who work together to solve problems using their separate expertise. Zach Sailer highlights how tools like the Notebook, JupyterHub, and ipywidgets can be used to make these collaborations smoother and more effective.
Keynotes
Rachel Thomas (fast.ai)
A class of machine learning algorithms called deep learning is achieving state-of-the-art results across many fields. Although some people claim you must start with advanced math to use deep learning, we found that the best way for any coder to get started is with code. We used Jupyter notebooks to provide an environment that encourages students to learn deep learning through experimentation.
JupyterHub deployments
Moderated by: Dave Goodsmith
At DataScience.com, we've championed Jupyter as the foundation for our cloud, primarily because it provides seamless communication between data scientists and the end users of their models. Through our work with the NSF hubs (https://www.datascience.com/resources/videos/the-science-of-data-driven-storytelling), we've researched best practices; we'll explain them and how our road led to Jupyter.
JupyterHub deployments
Shreyas Cholia (Lawrence Berkeley National Laboratory), Rollin Thomas (Lawrence Berkeley National Laboratory), Shane Canon (Lawrence Berkeley National Laboratory)
Shreyas Cholia, Rollin Thomas, and Shane Canon share their experience leveraging JupyterHub to enable notebook services for data-intensive supercomputing on the Cray XC40 Cori system at the National Energy Research Scientific Computing Center (NERSC).
Core architecture
Safia Abdalla (nteract)
Tutorial Please note: to attend, your registration must include Tutorials on Wednesday.
Have you wondered what it takes to go from a Jupyter user to a Jupyter pro? Wonder no more. Safia Abdalla explores the core concepts of the Jupyter ecosystem, including the extensions ecosystem, the kernel ecosystem, and the frontend architecture, leaving you with an understanding of the possibilities of the Jupyter ecosystem and practical skills on customizing the Jupyter Notebook experience.
Usage and application
Paco Nathan (O'Reilly Media)
Paco Nathan reviews use cases where Jupyter provides a frontend to AI as the means for keeping humans in the loop. This process enhances the feedback loop between people and machines, and the end result is that a smaller group of people can handle a wider range of responsibilities for building and maintaining a complex system of automation.
Usage and application
Srinivas Sunkara (Bloomberg LP), Cheryl Quah (Bloomberg LP)
Strong partnerships between the open source community and industry have driven many recent developments in Jupyter. Srinivas Sunkara and Cheryl Quah discuss the results of some of these collaborations, including JupyterLab, bqplot, and enhancements to ipywidgets that greatly enrich Jupyter as an environment for data science and quantitative financial research.
Moderated by: Patrick Huck & Shreyas Cholia
The open Materials Project (MP, https://materialsproject.org), which supports the design of novel materials, now allows users to contribute and share new theoretical and experimental materials data via the MPContribs tool. MPContribs uses Jupyter and JupyterHub at every layer and is an important step in MP's effort to deliver a next-generation collaborative platform for materials (data) science.
Usage and application
Aaron Kramer (DataScience.com)
Tutorial Please note: to attend, your registration must include Tutorials on Wednesday.
Modern natural language processing (NLP) workflows often require interoperability between multiple tools. Aaron Kramer offers an introduction to interactive NLP with SpaCy within the Jupyter Notebook, covering core NLP concepts, core workflows in SpaCy, and examples of interacting with other tools like TensorFlow, NetworkX, LIME, and others as part of interactive NLP projects.
Usage and application
Moderated by: Harold Mitchell
Today's healthcare and research professionals have a wealth of precious historical data in need of a predictive outcome. Wouldn't it be nice to carry around a web-based notebook with built-in algorithms to perform predictions? Even better, algorithms built and maintained by you.
Keynotes, Sponsored Keynote
Peter Wang (Continuum Analytics)
Open source has emerged as a valuable player in the enterprise in recent years, and projects like Jupyter and companies like Anaconda are leading the way. Hear Peter Wang, CTO and co-founder of Continuum Analytics, discuss the coevolution of these two major players in the new open data science ecosystem and the next steps to a sustainable future.
Usage and application
R.Stuart Geiger (UC Berkeley Institute for Data Science), Brittany Fiore-Gartland (eScience Institute | Department of Human Centered Design and Engineering, University of Washington), Charlotte Cabasse-Mazel (UC Berkeley Institute for Data Science)
The concept of rituals is useful for thinking about how the core technology of Jupyter notebooks is extended through other tools, platforms, and practices. R. Stuart Geiger, Brittany Fiore-Gartland, and Charlotte Cabasse-Mazel share ethnographic findings about various rituals performed with Jupyter notebooks.
Reproducible research and open science
Moderated by: Agostino De Marco
The audience will be guided through all the phases of the typical workflow required to arrange and launch a set of Monte Carlo simulations of flight trajectories. We will present some realistic and interesting flight simulation scenarios, such as automatic landing in a steady atmosphere and wind turbine wake encounters in crosswind conditions.
Development and community
Kyle Kelley (Netflix)
So, Netflix's data scientists and engineers...do they know things? Join Kyle Kelley to find out. Kyle explores how Netflix uses Jupyter and explains how you can learn from Netflix's experience to enable analysts at your organization.
This session will be given by a member of the core Jupyter team. More details to come.
Usage and application
Andrew Therriault (City of Boston)
Jupyter notebooks are a great tool for exploratory analysis and early development, but what do you do when it's time to move to production? A few years ago, the obvious answer was to export to a pure Python script, but now there are other options. Andrew Therriault dives into real-world cases to explore alternatives for integrating Jupyter into production workflows.
Moderated by: Jacob Frias Koehler
Here, we present an undergraduate mathematics curriculum that leverages the Jupyter notebook and JupyterHub to deliver course content and serve as the computational platform for students. These materials are motivated by introductory classes typically labeled Quantitative Reasoning, Precalculus, and Calculus I.
Usage and application
Moderated by: Laxmikanth Malladi
Spinning up Jupyter on AWS is easy, with many references for deploying on EC2 and EMR. This session provides additional configurations and patterns that enterprises can use to govern, track, and audit Jupyter usage on AWS.
Posters will be presented Wednesday evening in a networking setting where attendees can mingle with the presenters to discuss their Jupyter work one-on-one.
Jupyter subprojects
Sylvain Corlay (QuantStack)
Tutorial Please note: to attend, your registration must include Tutorials on Wednesday.
With Jupyter widgets, you can build user interfaces with graphical controls inside a Jupyter notebook, documentation, and web pages. Jupyter widgets also provide a framework for building custom controls. Sylvain Corlay demonstrates how to use Jupyter widgets effectively for interactive computing, explores the ecosystem of custom controls, and walks you through building your own control.
JupyterHub deployments
Moderated by: Jeffrey Denton
It is a match made in the cloud. By marrying JupyterHub and CloudyCluster, users gain access to scalable Jupyter without the headache and overhead of operations. Learn how CloudyCluster can scale JupyterHub to support thousands of users and thousands of computers, all from your smartphone, tablet, or desktop device.
Core architecture
Min Ragan-Kelley (Simula Research Laboratory), Carol Willing (Cal Poly San Luis Obispo)
JupyterHub is a multiuser server for Jupyter notebooks. Min Ragan-Kelley and Carol Willing discuss exciting recent additions and future plans for the project, including sharing notebooks with students and collaborators.
This session will be given by a member of the core Jupyter team. More details to come.
Core architecture
Steven Silvester (Continuum Analytics), Jason Grout (Bloomberg)
Tutorial Please note: to attend, your registration must include Tutorials on Wednesday.
Steven Silvester and Jason Grout lead a walkthrough of JupyterLab as a user and as an extension author, explore the capabilities of JupyterLab, and offer a demonstration of how to create a simple extension to the environment.
Keynotes
Brett Cannon (Microsoft | Python Software Foundation)
Details to come.
Keynotes
Demba Ba (Harvard University)
Details to come.
Keynotes
Fernando Perez (Lawrence Berkeley National Laboratory and UC Berkeley)
Details to come.
Keynotes
Jeremy Freeman (Chan-Zuckerberg Initiative)
Details to come.
Keynotes
Lorena Barba (George Washington University)
Details to come.
Keynotes
Nadia Eghbal (GitHub)
Details to come.
Keynotes
Wes McKinney (Two Sigma Investments)
Details to come.
Keynotes
Two keynotes to come.
Keynotes
Details to come.
Keynotes
Details to come.
Keynotes
Details to come.
Keynotes
Two keynotes to come.
The advent of many interdisciplinary research areas and the cooperation of different scientific fields demand computational systems that allow for efficient collaboration. Kooplex, our highly integrated system incorporating the advantages of Jupyter notebooks, public dashboards, version control, and data sharing, serves as a basis for projects in fields ranging from medicine to physics.
Development and community
Kari Jordan (Data Carpentry)
Diversity can be achieved through sharing information among members of a community. Jupyter prides itself on being a community of dynamic developers, cutting-edge scientists, and everyday users, but is our platform being shared with diverse populations? Kari Jordan explains how training has the potential to improve diversity and drive usage of Jupyter notebooks in broader communities.
Reproducible research and open science
Megan Risdal (Kaggle), Wendy Chih-wen Kan (Kaggle)
Kaggle Kernels, an in-browser code execution environment that includes a version of Jupyter Notebooks, has allowed Kaggle to flourish in new ways. Drawing on a diverse repository of user-created notebooks paired with competitions and public datasets, Megan Risdal and Wendy Chih-wen Kan explain how Kernels has impacted machine learning trends, collaborative data science, and learning.
Usage and application
Christine Doig (Continuum Analytics), Fabio Pliger (Continuum Analytics)
Christine Doig and Fabio Pliger explain how they built a commercial product on top of Jupyter to help Excel users access the capabilities of the rich Python data science ecosystem and share examples and use cases from a variety of industries that illustrate the collaborative workflow between analysts and data scientists that the application has enabled.
Robert Schroll (The Data Incubator)
2-Day Training Please note: to attend, your registration must include Training courses.
Robert Schroll introduces TensorFlow's capabilities through its Python interface with a series of Jupyter notebooks, moving from building machine learning algorithms piece by piece to using the higher-level abstractions provided by TensorFlow. You'll then use this knowledge to build and visualize machine learning models on real-world data.
JupyterHub deployments
Ryan Lovett (Department of Statistics, UC Berkeley), Yuvi Panda (Wikimedia Foundation)
The UC Berkeley Data Science Education program uses Jupyter notebooks on a JupyterHub. Ryan Lovett and Yuvi Panda outline the DevOps principles that keep the largest reported educational hub (with 1,000+ users) stable and performant while enabling all the features instructors and students require.
JupyterHub deployments
Saranga Komanduri (Civis Analytics), Lori Eich (Civis Analytics)
The product and engineering teams at Civis Analytics integrated Jupyter notebooks into our cloud-based platform, providing the ability to run multiple notebooks concurrently and share them. We'll present what we learned about notebook users and their user stories, as well as the various technical challenges we encountered. You'll hear from both engineering and product as we co-present our approaches.
Documentation
Carol Willing (Cal Poly San Luis Obispo)
Music engages and delights. Carol Willing explains how to explore and teach the basics of interactive computing and data science by combining music with Jupyter notebooks, using music21, a tool for computer-aided musicology, and Magenta, a TensorFlow project for making music with machine learning, to create collaborative narratives and publishing materials for teaching and learning.
Usage and application
Patty Ryan (Microsoft), Lee Stott (Microsoft), Michael Lanzetta (Microsoft)
Patty Ryan, Lee Stott, and Michael Lanzetta explore four industry examples of Jupyter notebooks that illustrate innovative applications of machine learning in manufacturing, retail, services, and education and share four reference industry Jupyter notebooks (available in both Python and R)—along with demo datasets—for practical application to your specific industry value areas.
Reproducible research and open science
Moderated by: Alexandr Notchenko
Jupyter notebooks are obviously a great tool for anyone who wants to perform data analysis, make an argument, and build beautiful visualisations to support it. I want to talk about what's less obvious: new developments around notebooks that enable them to embody the iterative, empirical, model-building process of scientific discovery, one that satisfies most of the criteria of science.
Reproducible research and open science
Hilary Parker (Stitch Fix)
Traditionally, statistical training has focused on statistical methods and tests, without addressing the process of developing a technical artifact, such as a report. Hilary Parker argues that it's critical to teach students how to go about developing an analysis so they avoid common pitfalls and explains why we must adopt a blameless postmortem culture to address these pitfalls as they occur.
Reproducible research and open science
Daniel Mietchen (University of Virginia)
Jupyter notebooks are a popular option for sharing data science workflows. Daniel Mietchen shares best practices for reproducibility and other aspects of usability (documentation, ease of reuse, etc.) gleaned from analyzing Jupyter notebooks referenced in PubMed Central, a project that started at a hackathon earlier this year and is still ongoing and is being documented on GitHub.
Christian Moscardi (The Data Incubator)
2-Day Training. Please note: to attend, your registration must include Training courses.
Christian Moscardi walks you through developing a machine learning pipeline, from prototyping to production, with the Jupyter platform, exploring data cleaning, feature engineering, model building and evaluation, and deployment in an industry-focused setting. Along the way, you'll learn Jupyter best practices and the Jupyter settings and libraries that enable great visualizations.
Usage and application
Moderated by: Aaron Goldenberg
Portfolio selection and optimization techniques using Google's TensorFlow
Usage and application
Moderated by: Bill Walrond
In this presentation, Kevin Rasmussen, solution architect at Caserta Concepts, discusses why notebooks aren't just for data scientists anymore. Drawing on a current project with one of the most respected newspapers in the country, he goes into detail about how to put data engineering into production with notebooks.
Usage and application
Moderated by: Jonathan Whitmore
Project Jupyter contains tools that are perfect for many data science tasks, including rapid iteration for data munging, visualizing, and creating a beautiful presentation of results. The same tools that give power to individual data scientists can prove challenging to integrate in a team setting. This talk will emphasize overall best practices for data science team productivity.
Moderated by: David P. Sanders (Department of Physics, Faculty of Sciences, National University of Mexico)
An overview of using Julia with the Jupyter notebook, showing how the flexibility of the language is reflected in the notebook environment.
Jupyter subprojects
Ian Rose (UC Berkeley)
Ian Rose shares recent work on allowing for real-time collaboration in Jupyter notebooks, including installation, usage, and design decisions.
Reproducible research and open science
Moderated by: Eduardo Arino de la Rubia
Jupyter has ignited enthusiasm about reproducible research. To fulfill the promise of this concept, it’s insufficient to have results simply captured inline with one’s code. Changing data sets, environment factors (e.g., packages), and versioning of notebooks themselves are all challenges. We describe a solution to achieve deep reproducibility using Jupyter, Docker, and version control.
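One ingredient of deep reproducibility is making a notebook run traceable to its exact inputs and environment. The function below is a minimal stdlib sketch of that idea, not the speakers' actual solution: it records the interpreter version, the platform, and SHA-256 digests of input data files, producing a fingerprint that can be committed to version control alongside the notebook.

```python
import hashlib
import platform
import sys

def environment_fingerprint(data_files):
    """Record interpreter version, platform, and SHA-256 digests of
    input data files, so a notebook run can later be matched to the
    exact data and environment it used."""
    digests = {}
    for path in data_files:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        digests[path] = h.hexdigest()
    return {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "data_sha256": digests,
    }

# Typically serialized (e.g., as JSON) next to the notebook and committed,
# while Docker pins the package environment itself.
```

Pinning packages falls to the Docker image; this fingerprint covers the remaining moving parts, the data and the interpreter.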
Kernels
Alexandre Archambault explores why an official Scala kernel for Jupyter has yet to emerge. Part of the answer lies in the fact that there is no user-friendly, easy-to-use Scala shell in the console (i.e., no IPython for Scala). But there's a new contender, Ammonite, although it still has to overcome a few challenges, not least being supported by big data frameworks like Spark, Scio, and Scalding.
Reproducible research and open science
Moderated by: Majid Khorrami & Laura Kahn
What if decision makers could use data science techniques to predict how much economic aid they would receive each year? Our proposal will show how we did just that and used data for social good.
Gather before keynotes on Thursday and Friday morning for a speed networking event. Enjoy casual conversation while meeting new attendees.
Kernels
Moderated by: Marius van Niekerk
Spylon-kernel is a pure Python Jupyter metakernel that gives Python and Scala users an easy-to-use kernel for working with Apache Spark.
Jupyter subprojects
Christian Moscardi (The Data Incubator)
Christian Moscardi shares the practical solutions developed at the Data Incubator for using Jupyter notebooks for education. Christian explores some of the open source Jupyter extensions he has written to improve the learning experience as well as tools to clean notebooks before they are committed to version control.
Usage and application
Moderated by: Joshua Cook
This teaching session takes participants through using Docker's suite of tools and the numpy/scipy ecosystem, with Jupyter as a feature-rich programming interface, to build powerful systems for performing rich analysis and transformation on datasets of any size.
Reproducible research and open science
The DOE Systems Biology Knowledgebase (KBase) is an open source project that enables biological scientists to create, execute, collaborate on and share reproducible analysis workflows. KBase's Narrative Interface, built on the Jupyter Notebook, is the front end to a scalable object store, an execution engine, a distributed compute cluster, and a library of analysis tools packaged as Docker images.
Industry Table discussions are a great way to informally network with people in similar industries or interested in the same topics.
Keynotes
Thursday Keynotes
Meet the Experts are your chance to meet face-to-face with JupyterCon presenters in a small-group setting. Drop in to discuss their sessions, ask questions, or make suggestions.
Keynotes
Andrew Odewahn (O'Reilly Media), Fernando Perez (Lawrence Berkeley National Laboratory and UC Berkeley)
Program chairs Andrew Odewahn and Fernando Perez open the first day of keynotes.
Usage and application
Marc Colangelo (Zymergen), Justin Nand (Zymergen), Danielle Chou (Zymergen)
Zymergen approaches biology with an engineering and data-driven mindset. Its platform integrates robotics, software, and biology to deliver predictability and reliability during strain design and development. Marc Colangelo, Justin Nand, and Danielle Chou explain the integral role Jupyter notebooks play in providing a shared Python environment between Zymergen's software engineers and scientists.
Development and community
Moderated by: Timothy Dobbins
SQLCell is a magic function that executes raw, parallel, parameterized SQL queries, accepts Python variables as parameters, switches between engines with a button click, runs outside of a transaction block, and produces an intuitive query plan graph with D3.js to highlight slow points in a query, all while concurrently running Python code. And much more.
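The core trick behind a magic like this is binding ordinary Python variables as SQL parameters instead of interpolating them into the query string. The sketch below is a minimal stdlib illustration of that idea (it is not SQLCell's actual API); wrapping such a function in an IPython cell magic is what lets a `%%sql`-style cell accept Python variables safely.

```python
import sqlite3

def run_sql(conn, query, params=None):
    """Execute a parameterized query and return all rows.  Using bind
    parameters (here, sqlite3's named-parameter style) keeps Python
    values out of the SQL text, avoiding quoting bugs and injection."""
    cur = conn.execute(query, params or {})
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (name TEXT, views INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [("home", 120), ("search", 45)])

min_views = 100  # an ordinary Python variable used as a bind parameter
rows = run_sql(conn, "SELECT name FROM events WHERE views >= :v",
               {"v": min_views})
print(rows)  # [('home',)]
```

SQLCell layers engine switching, parallelism, and the D3.js plan graph on top of this same parameter-binding foundation.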
Development and community
Moderated by: Jason Kuruzovich
FreeCodeCamp.com is an online learning platform for coding that has figured out how to use distributed content creation to power a learning community. This talk will discuss FreeCodeCamp and detail my current efforts to start a similar model for analytics with AnalyticsDojo.com, including content, technical, and community-related opportunities and challenges.
Extensions and customization
Andreas Mueller (Columbia University)
The Jupyter Notebook can combine narrative, code, and graphics—the ideal combination for teaching anything programming related. That's why Andreas Müller chose to write his book, Introduction to Machine Learning with Python, in a Jupyter notebook. However, going from notebook to book was not easy. Andreas shares challenges and tricks for converting notebooks for print.
Kernels
Sylvain Corlay (QuantStack), Johan Mabille (QuantStack)
Xeus takes on the burden of implementing the Jupyter kernel protocol so that kernel authors can focus on implementing the language-specific parts of the kernel and supporting features such as autocomplete or interactive widgets. Sylvain Corlay and Johan Mabille showcase a new C++ kernel, built with xeus, based on the Cling interpreter.
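To see what xeus is abstracting away, it helps to look at the shape of a Jupyter protocol message. The sketch below builds the dict structure of an `execute_request` as described by the Jupyter messaging specification; the exact field set shown is a simplified reading of that spec, but the header/parent_header/metadata/content shape is what every kernel must parse and answer.

```python
import uuid
from datetime import datetime, timezone

def make_execute_request(code, session):
    """Build the dict structure of a Jupyter 'execute_request' message.
    All protocol messages share this four-part shape; only msg_type
    and the content payload vary."""
    return {
        "header": {
            "msg_id": uuid.uuid4().hex,
            "session": session,
            "username": "user",
            "date": datetime.now(timezone.utc).isoformat(),
            "msg_type": "execute_request",
            "version": "5.3",
        },
        "parent_header": {},   # filled in on replies, to link them back
        "metadata": {},
        "content": {
            "code": code,
            "silent": False,
            "store_history": True,
            "user_expressions": {},
            "allow_stdin": True,
            "stop_on_error": True,
        },
    }

msg = make_execute_request("1 + 1", session=uuid.uuid4().hex)
print(msg["header"]["msg_type"])  # execute_request
```

Handling the serialization, signing, and routing of messages like this for every language is exactly the repeated work xeus factors out.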