Big data solutions like NoSQL databases and data lakes make it easy for anyone to contribute data to the enterprise data store. Integrating previously siloed data streams with other sources across the enterprise makes it possible to uncover otherwise hidden trends, anomalies, and powerful predictors of business successes and failures.
But integrating data across silos in a large enterprise is fraught with peril. There are typically few standards for naming conventions and data representation, and documentation is spotty at best. For example, each silo may have its own identifiers for central concepts like customer accounts and devices, yet all of them may simply be named “accountId” or “deviceId.” The old rule of thumb often applies: 70% of an analyst’s time goes into gathering, understanding, cleaning, and integrating the data, while only 30% goes toward the actual analyses and simulations. Joining data by simply matching on attribute names may yield “frankendata”: a monstrous data chimera whose parts do not belong together. Analysis of frankendata inevitably leads to misleading results and may even decrease rather than increase performance and customer satisfaction. In other words: garbage in, garbage out (GIGO).
Comcast is developing Aeolus, a new internal system that acts as a single point of ingest for real-time data, and a set of data transformation and storage solutions that are designed to avoid these data integration problems. The secret? Comcast uses Apache Avro schemas for data governance across the entire architecture.
Avro schemas document and enforce the types and structures of data, and also document the meaning of each attribute. A library of core subschemas enables reuse of standard naming conventions and formats for commonly referenced data such as device and account. When core subschemas are used and the data producer refers to “deviceId,” the semantics of that field are well known and documented.
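To make the core-subschema idea concrete, here is a hypothetical sketch in Python (the record names, namespaces, and doc strings are illustrative, not Comcast’s actual schemas). A shared `Device` record is defined once with documented semantics, and an event schema produced by another team embeds it, so “deviceId” carries the same documented meaning everywhere it appears:

```python
import json

# Hypothetical core subschema: defined once, reused everywhere a device appears.
DEVICE_SCHEMA = {
    "type": "record",
    "name": "Device",
    "namespace": "core",
    "doc": "Canonical representation of a customer-premises device.",
    "fields": [
        {"name": "deviceId", "type": "string",
         "doc": "Globally unique device identifier (canonical format)."},
        {"name": "model", "type": ["null", "string"], "default": None,
         "doc": "Hardware model, if known."},
    ],
}

# An event schema from one producer embeds the core subschema, so the
# semantics of 'deviceId' are the same across every data stream.
EVENT_SCHEMA = {
    "type": "record",
    "name": "TelemetryEvent",
    "namespace": "telemetry",
    "doc": "A single telemetry reading from a device.",
    "fields": [
        {"name": "device", "type": DEVICE_SCHEMA,
         "doc": "The reporting device, using the shared core subschema."},
        {"name": "timestamp", "type": "long",
         "doc": "Event time, in milliseconds since the Unix epoch."},
    ],
}

def field_doc(schema, *path):
    """Walk nested record schemas and return the doc string for a field path."""
    for name in path:
        field = next(f for f in schema["fields"] if f["name"] == name)
        schema = field["type"]
    return field["doc"]

# The meaning of deviceId is documented once, in the core subschema.
print(field_doc(EVENT_SCHEMA, "device", "deviceId"))

# Avro schemas are plain JSON, so they can be published to a schema registry as-is.
registry_payload = json.dumps(EVENT_SCHEMA)
```

Because the schema is ordinary JSON, the same document serves as both machine-enforced structure and human-readable documentation.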
Avro schemas follow the data from the upstream to the downstream end of the architecture: from initial ingest via Apache Kafka, through intermediate processing and enrichment in flight, until it is finally at rest in one or more data storage options (big data lake, key-value store, time series database, etc.). Even batch data ingested through ETL must be stored in Avro format. This enables Comcast to understand and integrate data at any point in its journey.
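The enforcement side of this can be sketched with a minimal, stdlib-only validator (a real pipeline would use an Avro library such as avro or fastavro for binary encoding, plus a schema registry; the schema and record below are hypothetical). The point is that a record is checked against its schema at ingest, before it ever reaches downstream consumers:

```python
# Minimal sketch of schema enforcement at the point of ingest. This simplified
# validator handles only primitives, unions, and records, but it illustrates
# how the same schema that documents the data also gates what is accepted.

PRIMITIVES = {
    "string": str,
    "long": int,
    "int": int,
    "double": float,
    "boolean": bool,
    "null": type(None),
}

def conforms(value, schema):
    """Return True if value matches the (simplified) Avro schema."""
    if isinstance(schema, list):                 # union, e.g. ["null", "string"]
        return any(conforms(value, s) for s in schema)
    if isinstance(schema, str):                  # primitive type name
        return isinstance(value, PRIMITIVES[schema])
    if schema.get("type") == "record":           # nested record
        return isinstance(value, dict) and all(
            conforms(value.get(f["name"], f.get("default")), f["type"])
            for f in schema["fields"]
        )
    return False

INGEST_SCHEMA = {
    "type": "record",
    "name": "TelemetryEvent",
    "fields": [
        {"name": "deviceId", "type": "string"},
        {"name": "timestamp", "type": "long"},
    ],
}

good = {"deviceId": "stb-1234", "timestamp": 1500000000000}
bad = {"deviceId": 1234, "timestamp": "yesterday"}   # wrong types

assert conforms(good, INGEST_SCHEMA)
assert not conforms(bad, INGEST_SCHEMA)
```

Rejecting nonconforming records at the single point of ingest is what keeps malformed or mislabeled data from quietly accumulating downstream in the lake.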
Barbara Eckman explains how Comcast uses Apache Avro for enterprise data governance, the challenges it has faced, and the methods used to address them.
Barbara Eckman is a principal data architect at Comcast, where she leads data governance for an innovative, division-wide initiative comprising near-real-time ingest, streaming, transformation, storage, and analysis of big data. Barbara is a technical innovator and strategist with internationally recognized expertise in scientific data architecture and integration. Her experience includes technical leadership positions at a Human Genome Project center, Merck, GlaxoSmithKline, and IBM. She served on the IBM Academy of Technology, an internal peer-elected organization akin to the National Academy of Sciences.
©2017, O'Reilly Media, Inc. All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
Apache Hadoop, Hadoop, Apache Spark, Spark, and Apache are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries and are used with permission. The Apache Software Foundation has no affiliation with, and does not endorse or review, the materials provided at this event, which is managed by O'Reilly Media and/or Cloudera.