Moving from batch to streaming means changing how we think about time. Streaming data is unbounded and rarely well ordered in time. Yet to make streaming systems useful and deliver on the promise of low-latency results, we often need to know when we have all the data relevant to emitting a correct aggregation. Watermarks provide the foundation for making such decisions, enabling streaming systems to emit timely, correct results when processing out-of-order data. On top of this foundation, triggers provide a way to declare when outputs within a streaming pipeline should be materialized, letting practitioners balance the tensions among accuracy, latency, and cost for their specific use case.
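The interplay between watermarks and triggers can be sketched in a few lines of Python. This is a toy illustration under simplifying assumptions (a single min-based watermark over per-source event times, fixed windows, invented names), not the MillWheel/Beam implementation, which computes watermarks per stage across a distributed pipeline:

```python
# Toy sketch of watermark-driven triggering (illustrative assumptions only).

WINDOW = 60  # fixed 60-second event-time windows

def watermark(max_event_times, allowed_skew):
    """Estimate event-time progress: lag the slowest source by the
    amount of out-of-orderness we are willing to tolerate."""
    return min(max_event_times) - allowed_skew

# Per-source maximum event time observed so far (seconds); names are made up.
seen = {"logs-us": 130, "logs-eu": 95}
wm = watermark(seen.values(), allowed_skew=10)  # min(130, 95) - 10 = 85

# A watermark trigger may finalize any window whose end the watermark
# has passed; later windows keep waiting for more data.
windows = [0, 60, 120]  # window start times
complete = [w for w in windows if w + WINDOW <= wm]
print(wm, complete)  # 85 [0]
```

In a real pipeline, triggers generalize this single firing condition: they can also fire early (speculative results before the watermark) or late (corrections for data that arrives behind the watermark), which is how the accuracy/latency/cost trade-off above is expressed in practice.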
Given the trend toward out-of-order processing in existing streaming systems, understanding watermarks is an increasingly important skill when designing pipelines. This methodology, first discussed in the MillWheel paper and further explored in the Dataflow model paper, is now referred to as the Beam model. This approach is not limited to just Google’s stream processing efforts; rather, it is a solution to a general problem that must be addressed by any system that wishes to provide timely out-of-order distributed stream processing and has since been pursued by others such as Flink and Qubit (which built a watermark tracking system on top of Spark Streaming for its own internal use).
Based on his experience developing and using watermarks and triggers at Google, Slava Chernyak discusses details of how watermarks and triggers are applied, as well as their strengths and limitations, and explores real-world use cases, providing a practical set of tools for understanding watermarks and time in out-of-order stream processing pipelines. Along the way, Slava also outlines some of the implementation challenges for computing watermarks with low latency in a highly distributed system and implementing triggers correctly in complex scenarios.
Slava Chernyak is a senior software engineer at Google. Slava spent over five years working on Google’s internal massive-scale streaming data processing systems and has since become involved with designing and building Google Cloud Dataflow Streaming from the ground up. Slava is passionate about making massive-scale stream processing available and useful to a broader audience. When he is not working on streaming systems, Slava is out enjoying the natural beauty of the Pacific Northwest.
©2016, O'Reilly Media, Inc. • All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
Apache Hadoop, Hadoop, Apache Spark, Spark, and Apache are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries, and are used with permission. The Apache Software Foundation has no affiliation with, and does not endorse or review, the materials provided at this event, which is managed by O'Reilly Media and/or Cloudera.