Much as SQL stands as a lingua franca for declarative data analysis, Apache Beam aims to provide a portable standard for expressing robust, out-of-order data processing pipelines in a variety of languages across a variety of platforms. Because Beam cleanly separates the user's processing logic from the details of the underlying execution engine, the same pipeline can run on any Apache Beam runtime environment, whether on-premises or in the cloud, on open source frameworks like Apache Spark or Apache Flink, or on managed services like Google Cloud Dataflow.
Reuven Lax offers an overview of Beam's basic concepts and demonstrates that portability in action. After introducing the capabilities of the Beam model for data processing and the current state of the Beam ecosystem, Reuven outlines the benefits Beam provides in portability and ease of use and demos the same Beam pipeline running on multiple runners in multiple deployment scenarios (e.g., Apache Flink on Google Cloud, Apache Spark on AWS, Apache Apex on-premises). Along the way, Reuven covers some of the challenges Beam aims to address in the future.
Reuven Lax is a senior staff software engineer at Google, the tech lead for cloud-based stream processing (i.e., the streaming engine behind Google Cloud Dataflow), and the former tech lead of MillWheel.
©2017, O'Reilly Media, Inc.