
The Jupyter Notebook as a transparent way to document machine learning model development: A case study from a US defense agency

Catherine Ordun (Booz Allen Hamilton)
11:55am–12:35pm Friday, August 24, 2018
JupyterCon Business Summit, Training and education, Usage and application
Location: Concourse A: Business Summit
Level: Beginner
Average rating: 5.00 (5 ratings)

Who is this presentation for?

  • Data scientists, Python users, and practitioners in business, machine learning, deep learning, and the public sector

Prerequisite knowledge

  • A basic understanding of machine learning concepts
  • Familiarity with Python

What you'll learn

  • Learn how a US government agency used real-world disease data and modern time series forecasting techniques to forecast infectious disease transmission, documenting the project and sharing results in Jupyter notebooks
  • Understand why machine learning model development should be made as transparent as possible

Description

Machine learning is new to many US government agencies, and they need to transparently document each step of a model, from data preparation to final prediction. One US defense agency has used the Jupyter Notebook to document its steps and show results throughout the model-building process for a series of recurrent neural network (RNN) algorithms. The project was so successful that the team has recommended the Jupyter Notebook as a key component of model documentation for all government scientists.

Catherine Ordun walks you through a notebook built to test the feasibility of developing multivariate time series models to predict cases of pertussis collected weekly over a 10-year period. The models were built in Keras with a TensorFlow backend and developed in Jupyter in order to transparently show the progress of training and testing for a US defense agency's technical approach. The notebook chronicles the team's data science workflow, from data acquisition and preprocessing through neural network building to evaluation and final model selection.

The project used EpiArchive, a publicly available source of weekly time series data from Los Alamos National Laboratory. The team used the Python requests library to call the EpiArchive API and convert the case data for a dozen different infectious diseases into a pandas DataFrame. They added weekly NOAA temperature and precipitation time series as multivariate features, then converted and normalized the data.

To establish a baseline before building any neural networks, the team fit a basic ARIMA time series model to predict weekly pertussis cases, achieving a mean absolute error of 6.633. They then built initial LSTM and GRU models, visualizing the training and validation loss in matplotlib and using the Keras callback function to monitor runs on TensorBoard (outside of the Jupyter Notebook). As the team experimented with different hyperparameters and layers for the LSTM and GRU (e.g., adding dropout and changing the activation functions, optimizers, and learning rate), they arrived at a set of final models in the notebook. After several more iterations of hyperparameter tuning, they selected a nonstateful LSTM as the final model: one input layer, one LSTM layer with 10 units, and two fully connected layers, with 20% dropout and tanh (hyperbolic tangent) activation, trained for 100 epochs with a batch size of 20. The final model achieved a mean absolute error of 0.0896.
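
As a rough illustration of the data acquisition and preprocessing steps, the sketch below uses requests, pandas, and scikit-learn; the EpiArchive endpoint and JSON shape, the NOAA filename, and the min-max normalization are assumptions for illustration, not details from the session:

    import pandas as pd
    import requests
    from sklearn.preprocessing import MinMaxScaler

    # Placeholder URL: the real EpiArchive API endpoint is not given in the talk.
    EPIARCHIVE_URL = "https://epiarchive.example.gov/api/cases"

    response = requests.get(EPIARCHIVE_URL,
                            params={"disease": "pertussis", "resolution": "week"})
    response.raise_for_status()

    # Assumes the API returns a JSON list of records with "date" and "cases" fields.
    cases = pd.DataFrame(response.json())
    cases["date"] = pd.to_datetime(cases["date"])
    cases = cases.set_index("date").sort_index()

    # Placeholder file of weekly NOAA temperature and precipitation readings,
    # indexed by the same weekly dates as the case counts.
    noaa = pd.read_csv("noaa_weekly.csv", index_col="date", parse_dates=True)
    features = cases.join(noaa, how="inner")

    # The talk says the data were normalized but does not name the method,
    # so scaling every feature to [0, 1] is an assumption.
    scaled = pd.DataFrame(MinMaxScaler().fit_transform(features),
                          index=features.index, columns=features.columns)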
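
The ARIMA baseline could be reproduced along these lines with the 2018-era statsmodels API; the (p, d, q) order and the train/test split are guesses, since the session reports only the resulting mean absolute error of 6.633:

    from sklearn.metrics import mean_absolute_error
    from statsmodels.tsa.arima_model import ARIMA  # statsmodels <= 0.12 API

    # Hold out the final year of weekly counts; the actual split is not specified.
    series = cases["cases"].astype(float)
    train, test = series[:-52], series[-52:]

    # The (2, 1, 1) order is illustrative, not the team's configuration.
    fitted = ARIMA(train, order=(2, 1, 1)).fit(disp=0)
    forecast, _, _ = fitted.forecast(steps=len(test))

    print("baseline MAE:", mean_absolute_error(test, forecast))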
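
The final nonstateful LSTM described above (one LSTM layer of 10 units, two fully connected layers, 20% dropout, tanh activation, 100 epochs, batch size 20) might look roughly like this in Keras; the windowed arrays X_train, y_train, X_val, and y_val, the optimizer, and the Dense layer widths are assumptions:

    import matplotlib.pyplot as plt
    from keras.callbacks import TensorBoard
    from keras.layers import LSTM, Dense, Dropout
    from keras.models import Sequential

    # X_train / X_val are assumed to be (samples, timesteps, features) arrays of
    # lagged, scaled case counts plus weather features; y_* are next-week targets.
    n_timesteps, n_features = X_train.shape[1], X_train.shape[2]

    model = Sequential()
    model.add(LSTM(10, activation="tanh",            # nonstateful by default
                   input_shape=(n_timesteps, n_features)))
    model.add(Dropout(0.2))                          # the 20% dropout from the talk
    model.add(Dense(10, activation="tanh"))          # two fully connected layers;
    model.add(Dense(1))                              # their widths are assumptions

    model.compile(loss="mae", optimizer="adam")      # optimizer is an assumption

    history = model.fit(X_train, y_train,
                        epochs=100, batch_size=20,
                        validation_data=(X_val, y_val),
                        # Keras callback for monitoring runs on TensorBoard
                        callbacks=[TensorBoard(log_dir="./logs")],
                        verbose=2)

    # Training and validation loss, as visualized in the notebook
    plt.plot(history.history["loss"], label="training loss")
    plt.plot(history.history["val_loss"], label="validation loss")
    plt.legend()
    plt.show()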

Catherine Ordun

Booz Allen Hamilton

Catherine Ordun is a Washington, DC-based senior data scientist at Booz Allen Hamilton with a background in biology, public health, and business. A self-taught Python programmer, she has led data science work across the US government, including intelligence and public health agencies and the DoD. She serves on the Women in Data Science Committee at Booz Allen, has presented to the National Academy of Medicine, and led her team to a top-three finish in a Health and Human Services opioid codeathon. Catherine is a two-time recipient of the Women of Color (WoC) award and is currently a program reviewer for SciPy 2018. She is passionate about machine learning, has recently started participating in Kaggle challenges, and founded an internal, firmwide machine intelligence meetup.