Presented By O'Reilly and Cloudera
Make Data Work
September 25–26, 2017: Training
September 26–28, 2017: Tutorials & Conference
New York, NY

When models go rogue: Hard-earned lessons about using machine learning in production

David Talby (Pacific AI)
5:25pm–6:05pm Wednesday, September 27, 2017
Data science & advanced analytics, Machine Learning & Data Science
Location: 1A 06/07 Level: Intermediate
Average rating: ***** (5.00, 2 ratings)

Who is this presentation for?

  • Data scientists, engineering leaders, and architects

Prerequisite knowledge

  • Basic familiarity with machine learning

What you'll learn

  • Understand best practices and lessons learned that are unique to the challenges of operating machine learning-intensive systems in production

Description

Much progress has been made over the past decade on process and tooling for managing large-scale, multitier, multicloud apps and APIs, but there is far less common knowledge on best practices for managing machine-learned models (classifiers, forecasters, etc.), especially once these models are in production, beyond the initial modeling, optimization, and deployment work.

Machine learning and data science systems often fail in production in unexpected ways. David Talby shares real-world case studies showing why this happens and explains what you can do about it, covering best practices and lessons learned from a decade of experience building and operating such systems at Fortune 500 companies across several industries.

Topics include:

  • Concept drift: Identifying and correcting for a change in the distribution of production data that causes pretrained models to decline in accuracy
  • Selecting the right retrain pipeline for your specific problem, from automated batch retraining to online active learning
  • A/B testing challenges: Recognizing common pitfalls like the primacy and novelty effects and best practices for avoiding them (like A/A testing)
  • Offline versus online measurement: Why both are often needed and best practices for getting them right (refreshing labeled datasets, judgment guidelines, etc.)
  • Delivering semisupervised and adversarial learning systems, where most of the learning happens in production and depends on a well-designed closed feedback loop
  • The impact of all of the above on project management, planning, staffing, scheduling, and expectation setting
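
To make the concept-drift topic above concrete, here is a minimal, illustrative sketch (not from the talk itself) of one common detection technique: computing the Population Stability Index (PSI) between a feature's training distribution and its production distribution. The `psi` function and the drift thresholds are assumptions chosen for illustration; real monitoring systems track this per feature, over time windows.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a training ("expected")
    sample and a production ("actual") sample of a numeric feature.
    Common rule of thumb (an assumption, not a universal standard):
    < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant drift."""
    lo, hi = min(expected), max(expected)
    # Equal-width bin edges derived from the training sample
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def hist(sample):
        counts = [0] * bins
        for x in sample:
            # Bin index: number of edges at or below x (out-of-range
            # production values fall into the first or last bin)
            counts[sum(1 for e in edges if x >= e)] += 1
        # Small floor avoids log(0) when a bin is empty
        total = len(sample)
        return [max(c / total, 1e-6) for c in counts]

    p, q = hist(expected), hist(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))
```

A matched production sample yields a PSI near zero, while a shifted one pushes it past the alerting threshold, which is the signal that would trigger the retrain pipelines discussed in the next bullet.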

David Talby

Pacific AI

David Talby is a chief technology officer at Pacific AI, helping fast-growing companies apply big data and data science techniques to solve real-world problems in healthcare, life science, and related fields. David has extensive experience in building and operating web-scale data science and business platforms, as well as building world-class, agile, distributed teams. Previously, he led business operations for Bing Shopping in the US and Europe as part of Microsoft's Bing group, and built and ran distributed teams that helped scale Amazon's financial systems in both Seattle and the UK. David holds a PhD in computer science and master's degrees in both computer science and business administration.