Organizations are increasingly looking to move their analytics and data warehouses to the cloud, not only to take advantage of the flexibility new technologies provide but also to empower end users with simple provisioning and instant access to data, better supporting a self-service BI model. Successfully transitioning analytic workloads to the cloud, however, requires an understanding of the architectural decisions that must be made and the trade-offs involved in making them. Greg Rahn explains how to build a big data warehouse that realizes the full potential of the cloud while minimizing friction for self-service BI and analytics.
When migrating data and analytics to the cloud, you need to know when to use object storage rather than local storage, how to design for multitenant isolation, and how to tune performance to meet SLAs. Greg explores the workload considerations to weigh when evaluating the cloud and offers an overview of common architectural patterns for optimizing price and performance, so you can answer these questions and more.
Greg Rahn is director of product management at Cloudera, where he’s responsible for driving SQL product strategy as part of the company’s data warehouse product team, including working directly with Impala. For over 20 years, Greg has worked with relational database systems in a variety of roles, including software engineering, database administration, database performance engineering, and most recently product management, giving him a holistic view of and deep expertise in the database market. Previously, Greg was a member of the esteemed Real-World Performance Group at Oracle and was the first member of the product management team at Snowflake Computing.
©2018, O'Reilly Media, Inc. • All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.