How should you train models and serve them (score with them)? One possibility is to treat the trained model as code, then run that code for scoring. This works fine if the model never changes for the lifetime of the scoring process, but it's not ideal for long-running data streams, where you would like to retrain the model periodically (due to concept drift) and score with the new model. A better way is to treat the model as data and exchange this model data between the training and scoring systems, which allows models to be updated in the running context.
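A minimal sketch of the model-as-data idea: the scoring process holds a reference to the current model and atomically swaps in new models as serialized bytes arrive from the training side (e.g., over a Kafka topic). The names here (`ModelServer`, `update_model`, the toy model functions) are illustrative, not the speakers' API.

```python
import pickle
import threading


def model_v1(x):
    """Toy initial model: stands in for a real trained model."""
    return x + 1


def model_v2(x):
    """Toy retrained model shipped later by the training system."""
    return x * 2


class ModelServer:
    """Scores records with whatever model was most recently received."""

    def __init__(self, initial_model):
        self._lock = threading.Lock()
        self._model = initial_model

    def update_model(self, model_bytes: bytes) -> None:
        # The training system ships the model as data (here: pickled bytes);
        # deserialize it and swap it in atomically.
        new_model = pickle.loads(model_bytes)
        with self._lock:
            self._model = new_model

    def score(self, record):
        with self._lock:
            model = self._model
        return model(record)


server = ModelServer(model_v1)
before = server.score(10)                    # 11, scored with the initial model
server.update_model(pickle.dumps(model_v2))  # training side ships a new model
after = server.score(10)                     # 20, scored with the swapped model
```

The key point is that the scoring process never restarts; only its model state changes.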
Boris Lublinsky and Dean Wampler walk you through different approaches to model training and serving that use this technique, where you make one or both functions an integrated part of the data processing pipeline implementation (i.e., as an additional functional transformation of the data). The advantage of this approach is that model serving is implemented as part of the larger data transformation pipeline. Such pipelines can be implemented either using streaming engines (e.g., Spark Streaming, Flink, or Beam) or streaming libraries (e.g., Akka Streams or Kafka Streams). Boris and Dean demonstrate example implementations using Akka Streams, Flink, and Spark Structured Streaming.
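The pattern of making serving a functional transformation of the data can be sketched, independent of any particular engine, as one stateful map over a merged stream: control events carry a new model, and data events are scored with the current one. The event shapes and function names below are illustrative assumptions, not a specific engine's API.

```python
def serve(events, initial_model):
    """Yield a score for each data event; model events update serving state."""
    model = initial_model
    for kind, payload in events:
        if kind == "model":     # control stream: swap in the new model
            model = payload
        else:                   # data stream: apply the current model
            yield model(payload)


def add_one(x):
    return x + 1


def double(x):
    return x * 2


# A merged stream: a data record, then a model update, then another record.
stream = [("data", 3), ("model", double), ("data", 3)]
results = list(serve(stream, add_one))   # [4, 6]
```

In Akka Streams, Flink, or Spark Structured Streaming, the same shape appears as a stateful operator joining a model (control) stream with the data stream.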
Along the way, they cover speculative execution of model serving, an approach that, among other benefits, lets latency-sensitive model serving applications meet their response-time requirements.
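One way to sketch speculative execution for serving: score the same record with several models in parallel and take the first answer that arrives within a deadline, falling back if the deadline is missed. The model functions and timeout below are illustrative assumptions.

```python
import concurrent.futures
import time


def fast_but_rough(x):
    return x + 1


def slow_but_accurate(x):
    time.sleep(1.0)   # simulate an expensive model
    return x + 2


def speculative_score(record, models, timeout):
    """Return the first model result within the deadline, or None."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = [pool.submit(m, record) for m in models]
        done, _ = concurrent.futures.wait(
            futures,
            timeout=timeout,
            return_when=concurrent.futures.FIRST_COMPLETED,
        )
        if done:
            return done.pop().result()   # first model to finish wins
        return None                      # deadline missed; caller falls back


result = speculative_score(10, [fast_but_rough, slow_but_accurate], timeout=0.25)
# result == 11: the fast model answers before the deadline
```

A variation picks the best answer among all models that finish in time, rather than just the first.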
They also cover performance optimizations for model training. Training from scratch requires all the relevant historical data, and therefore far more compute than is typically needed for scoring. To avoid this overhead, incremental training updates can sometimes be done instead. Minibatch training has existed for a while as a technique for training models on very large datasets, independent of the notion of streaming, and it's directly applicable to the streaming context, where new data arrives all the time. Another common way to simplify model serving is to train a sophisticated model, such as a neural net, and then train a simpler model, such as a logistic regression, using the neural net as a data generator. In other words, the simpler model approximates the complex model, trading off accuracy for better scoring performance. A variation of this approach is to use both models in the speculative execution, latency-sensitive context mentioned above.

You'll also learn the advantages of separating training and serving into two different systems (more implementation flexibility and the ability to optimize training and serving independently); how to use batch or minibatch training, saving intermediate models locally to restart training; how to train on a dedicated cluster where the hardware and software are optimized for model training; and how to leverage existing, publicly available models for well-known domains like NLP, where updates to the model are rarely required, eliminating the need to do training yourself.
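Incremental minibatch training can be sketched as a model whose parameters are updated with each small batch as it arrives, rather than retrained from all historical data. Below is plain SGD on a logistic regression in pure Python; the class name, learning rate, and batch contents are illustrative assumptions.

```python
import math


class OnlineLogisticRegression:
    """Logistic regression updated incrementally, one minibatch at a time."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def partial_fit(self, batch):
        """One SGD pass over a minibatch of (features, label) pairs."""
        for x, y in batch:
            err = self.predict_proba(x) - y   # gradient of the log loss
            self.b -= self.lr * err
            self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]


model = OnlineLogisticRegression(n_features=1)
# Minibatches arriving over time: positive label when the feature is positive.
for batch in ([([2.0], 1), ([-2.0], 0)],
              [([1.5], 1), ([-1.5], 0)]) * 50:
    model.partial_fit(batch)

# The model has learned the decision boundary from incremental updates alone.
print(model.predict_proba([2.0]) > 0.5)    # True
print(model.predict_proba([-2.0]) < 0.5)   # True
```

In a streaming pipeline, `partial_fit` would be called as each minibatch arrives, and the updated parameters published as model data to the serving side.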
Boris and Dean conclude by considering real-world production concerns like data governance for metadata, management and monitoring, reactive principles (e.g., availability requirements and how to meet them as well as how and when to scale as needed), and security.
Boris Lublinsky is a principal architect at Lightbend, where he specializes in big data, stream processing, and services. Boris has over 30 years’ experience in enterprise architecture. Previously, he was responsible for setting architectural direction, conducting architecture assessments, and creating and executing architectural road maps in fields such as big data (Hadoop-based) solutions, service-oriented architecture (SOA), business process management (BPM), and enterprise application integration (EAI). Boris is the coauthor of Applied SOA: Service-Oriented Architecture and Design Strategies, Professional Hadoop Solutions, and Serving Machine Learning Models. He’s also cofounder of and frequent speaker at several Chicago user groups.
Dean Wampler is an expert in streaming data systems, focusing on applications of machine learning and artificial intelligence (ML/AI). He’s head of developer relations at Anyscale, which is developing Ray for distributed Python, primarily for ML/AI. Previously, he was an engineering VP at Lightbend, where he led the development of Lightbend CloudFlow, an integrated system for building and running streaming data applications with Akka Streams, Apache Spark, Apache Flink, and Apache Kafka. Dean is the author of Fast Data Architectures for Streaming Applications, Programming Scala, and Functional Programming for Java Developers, and he’s the coauthor of Programming Hive, all from O’Reilly. He’s a contributor to several open source projects. A frequent conference speaker and tutorial teacher, he’s also the co-organizer of several conferences around the world and several user groups in Chicago. He earned his PhD in physics from the University of Washington.
©2019, O'Reilly Media, Inc.