
Model Debugging Strategies

Patrick Hall (H2O.ai | George Washington University)
1:30pm–5:00pm Monday, March 16, 2020
Location: LL21 E/F

Who is this presentation for?

Data scientists or analysts

Level

Intermediate

Description

You used cross-validation, early stopping, grid search, monotonicity constraints, and regularization to train a generalizable, interpretable, and stable model. Its lift, area under the curve (AUC), and other fit statistics look just fine on out-of-time test data, and better than those of the linear model it's replacing. You selected your cutoff judiciously and even used automatic code generation to create a real-time scoring engine. So, it's time to deploy?

No. Unfortunately, current best practices for machine learning (ML) model training and assessment can be insufficient for high-stakes, real-world ML systems. Much like other complex information technology systems, ML models need to be debugged for logical or run-time errors and for security vulnerabilities. Recent, high-profile failures have made it clear that ML models must also be debugged for disparate impact across demographic segments and other types of unwanted sociological bias.

This presentation introduces model debugging, along with systematic debugging and remediation strategies for ML. Model debugging is an emergent discipline focused on discovering and remediating errors in the internal mechanisms and outputs of ML models. It attempts to test ML models like code (because they are usually code), and it enhances trust in ML directly by increasing accuracy on new or holdout data, by decreasing or identifying hackable attack surfaces, or by decreasing sociological bias. As a side effect, model debugging should also increase the understanding and interpretability of model mechanisms and predictions.

Presented debugging strategies include (one is sketched in code after this list):

- Sensitivity analysis and variants: out-of-range and residual partial dependence, individual conditional expectation, adversarial examples, and random attacks
- Residual analysis and variants: disparate impact analysis, error analysis, and post hoc explanation of residuals
- Benchmark models
- White-hat hacks on ML
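
As a preview of the flavor of these strategies, here is a minimal sketch of one sensitivity-analysis variant, a random attack: scoring a trained model on random inputs and flagging predictions that fall outside the range seen on realistic data. The scikit-learn model and synthetic data below are illustrative assumptions, not the tutorial's own materials.

```python
# Illustrative sketch only: a random attack on a trained classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Range of scores the model produces on realistic (validation) data.
valid_scores = model.predict_proba(X_valid)[:, 1]
lo, hi = valid_scores.min(), valid_scores.max()

# Random attack: probe the model with inputs far outside the training
# distribution and look for unstable or out-of-range predictions.
rng = np.random.default_rng(0)
X_random = rng.uniform(low=-10.0, high=10.0, size=(10_000, X.shape[1]))
attack_scores = model.predict_proba(X_random)[:, 1]

# Rows scoring outside the validation range deserve manual inspection.
suspicious = (attack_scores < lo) | (attack_scores > hi)
print(f"{suspicious.sum()} of {len(X_random)} random rows scored outside "
      f"the validation range [{lo:.3f}, {hi:.3f}]")
```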

Presented remediation strategies include (again, see the sketch after this list):

- Anomaly detection
- Model assertions
- Model editing
- Model monitoring
- Noise injection
- Strong regularization
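
And here is an equally minimal sketch of one remediation strategy, model assertions: lightweight runtime checks wrapped around a scoring function that fail fast on impossible outputs and flag extreme ones for fallback handling. The function name and thresholds below are hypothetical, invented for illustration.

```python
# Illustrative sketch only: model assertions around a scoring function.
import numpy as np

def scoring_engine(model, X, lo=0.05, hi=0.95):
    """Score rows with simple runtime assertions applied to the output."""
    scores = model.predict_proba(X)[:, 1].copy()

    # Assertion 1: scores must be valid probabilities; fail fast if not.
    assert np.all((scores >= 0.0) & (scores <= 1.0)), "invalid probability"

    # Assertion 2 (business rule): do not fail on extreme scores; flag
    # them so downstream logic can route those rows to a fallback.
    flagged = (scores < lo) | (scores > hi)
    scores[flagged] = np.nan  # placeholder: hand off to a benchmark model

    return scores, flagged
```

In practice, flagged rows are often routed to a trusted benchmark model or to human review rather than being scored blindly.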

Want a sneak peek of the strategies? Check out these open resources.

Prerequisite knowledge

Attendees should have a working knowledge of tree-based ensemble models, linear models, and Python.

Materials or downloads needed in advance

This tutorial will be hosted in the H2O educational cloud, Aquarium: http://aquarium.h2o.ai. All that users will need is an email address and a laptop. Tutorial materials are kept open for review and suggestions on GitHub: https://github.com/jphall663/interpretable_machine_learning_with_python.

What you'll learn

Learn strategies to test and fix security vulnerabilities, unwanted sociological biases, and hidden errors in your machine learning systems.

Patrick Hall

H2O.ai | George Washington University

Patrick Hall is a senior director for data science products at H2O.ai, where he focuses mainly on increasing trust and understanding in machine learning. Patrick is also an award-winning lecturer in the Department of Decision Sciences at George Washington University, the lead author of the e-booklet "An Introduction to Machine Learning Interpretability", and a member of several multi-institutional working groups pursuing viable technical solutions to the complex challenges presented by high-stakes applications of artificial intelligence.
