Make Data Work
Oct 15–17, 2014 • New York, NY

Deploying and Evaluating Data Products

Josh Levy (Vast)
5:05pm–5:45pm Friday, 10/17/2014
Data Science
Location: 1 E6/1 E7
Average rating: 3.00 (1 rating)
Slides: 1 PDF

At Vast, we’ve built models to predict price, demand, and market-relative position in order to help agents and consumers in the midst of some of life’s largest and most considered purchases – homes and automobiles. We expose these models as RESTful APIs, creating data products that are monetized through a combination of licensed access to the API, integration into partner web services, and deployment in our own branded applications.
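To make the shape of such a data product concrete, here is a minimal sketch of a model exposed as a RESTful scoring endpoint. Flask, the /price route, and the predict_price() stub are illustrative assumptions, not Vast's actual service:

```python
# Minimal sketch (not Vast's production service) of exposing a price
# model as a RESTful API. The route and scoring stub are hypothetical.
from flask import Flask, request, jsonify

app = Flask(__name__)

def predict_price(features):
    # Stand-in for a trained model's scoring function.
    return 42000.0

@app.route("/price", methods=["POST"])
def price():
    # Accept listing features as JSON and return a predicted price.
    features = request.get_json()
    return jsonify({"predicted_price": predict_price(features)})

if __name__ == "__main__":
    app.run()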

We’ve experimented with different approaches for delivering models written in Python and R to a JVM-based engineering team. We’ve tried exporting models to PMML, rewriting our code in Java or Scala, and deploying our code using yhat’s model server.
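As a rough illustration of the PMML route, a scikit-learn model can be exported to PMML so a JVM service (for example, via a JPMML evaluator) can score it. The sklearn2pmml package, the toy training data, and the file name below are assumptions, not the code used at Vast, and the export step requires a Java runtime:

```python
# Sketch: export a scikit-learn model to PMML for JVM-side scoring.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

# Toy training data: (square_feet, bedrooms) -> price.
X = np.array([[1500, 3], [2200, 4], [900, 2]])
y = np.array([210000, 340000, 120000])

pipeline = PMMLPipeline([("regressor", LinearRegression())])
pipeline.fit(X, y)

# Writes a PMML document that a JVM-side evaluator can load.
sklearn2pmml(pipeline, "price_model.pmml")
```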

Once we improved our ability to deploy models, evaluating competing models emerged as our next bottleneck. To address that problem, we have experimented with a regression environment that uses offline evaluation for direct comparison of competing models, an A/B testing environment that uses online evaluation for indirect comparisons, and a batch scoring tool that creates side-by-side comparisons for manual review.
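The offline-evaluation idea can be sketched as fitting competing models on the same training data and comparing a shared metric on the same holdout set. The candidate models, synthetic data, and RMSE metric below are illustrative assumptions, not our actual regression environment:

```python
# Sketch: offline evaluation comparing competing models on one holdout set.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic regression data standing in for real listing features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = X @ np.array([3.0, -2.0, 0.5, 1.0]) + rng.normal(scale=0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "linear": LinearRegression(),
    "forest": RandomForestRegressor(random_state=0),
}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
    print(f"{name}: holdout RMSE = {rmse:.3f}")
```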

This talk is based on personal experience. I’ll share specific problem examples along with the good, the bad, and the ugly from each of our attempted solutions.

Josh Levy

Vast

Josh Levy is Senior Director, Data Science at Vast.com, where he has built personalized recommenders for homes and used vehicles. Previously he worked at Demand Media, where he developed contextual recommendations for a multimillion-document corpus. He earned his Ph.D. in Computer Science from the University of North Carolina, where he researched statistical shape models and medical image segmentation.