In pursuit of speed and efficiency, big data processing is continuing its logical evolution toward columnar execution. Julien Le Dem offers a glimpse into the future of column-oriented data processing with Arrow and Parquet.
A number of key big data technologies have, or will soon have, in-memory columnar capabilities, including Kudu, Ibis, Drill, and many others. Modern CPUs can achieve higher throughput by applying SIMD instructions and vectorization to Apache Arrow’s columnar in-memory representation. Similarly, Apache Parquet provides storage- and I/O-optimized columnar data access, using statistics and appropriate encodings. For interoperability, row-based encodings (CSV, Thrift, Avro) combined with general-purpose compression algorithms (gzip, LZO, Snappy) are common but inefficient. Julien explains why the Arrow and Parquet Apache projects define standard columnar representations that allow interoperability without the usual serialization cost.
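The advantage of a columnar layout can be sketched in plain Python. This is an illustration of the idea rather than Arrow’s actual API: each column lives in its own contiguous, fixed-width buffer (here the stdlib `array` type stands in for an Arrow buffer), which is what lets a vectorizing engine process many values per SIMD instruction and skip columns it doesn’t need.

```python
from array import array

# Row-oriented: each record is a heterogeneous tuple; summing one field
# means striding through every record and unpacking it.
rows = [(1, 10.5), (2, 20.25), (3, 30.25)]
row_total = sum(price for _, price in rows)

# Column-oriented: each field is a contiguous, fixed-width buffer
# (the layout Arrow standardizes). A SIMD-capable engine can scan
# `prices` with wide vector loads and never touch `ids` at all.
ids = array("q", [1, 2, 3])                # int64 column
prices = array("d", [10.5, 20.25, 30.25])  # float64 column
col_total = sum(prices)

assert row_total == col_total == 61.0
```

Both loops compute the same answer; the difference is memory layout, which is exactly what the CPU’s vector units and caches care about.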
This solid foundation for a shared columnar representation across the big data ecosystem promises great things for the future. Julien discusses the future of columnar data processing and the hardware trends it can take advantage of. Arrow-based interconnection between the various big data tools (SQL engines, UDFs, machine learning, big data frameworks, etc.) will allow them to be used together seamlessly and efficiently, without overhead. When collocated on the same processing node, read-only shared memory and IPC avoid communication overhead; when remote, scatter-gather I/O sends the memory representation directly to the socket, avoiding serialization costs; and soon RDMA will make it possible to expose data remotely.
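The scatter-gather idea can be demonstrated with the standard library alone. This is a minimal sketch, not Arrow’s transport code: `socket.sendmsg` hands the kernel a list of in-memory buffers (here, two stdlib `array` buffers standing in for Arrow column buffers), so the bytes go to the socket as-is, with no re-encoding and no gathering into an intermediate user-space buffer.

```python
import socket
from array import array

# Two contiguous column buffers, as a columnar record batch would hold them.
ids = array("q", [1, 2, 3])                # int64 column, 24 bytes
prices = array("d", [10.5, 20.25, 30.25])  # float64 column, 24 bytes

left, right = socket.socketpair()
try:
    # Scatter-gather send: one syscall, multiple buffers, taken
    # straight from memory without serialization.
    left.sendmsg([memoryview(ids), memoryview(prices)])
    payload = right.recv(4096)
finally:
    left.close()
    right.close()

# The receiver reinterprets the raw bytes in place: the price column
# starts right after the 24-byte id column, no decoding step needed.
received_prices = array("d")
received_prices.frombytes(payload[ids.itemsize * len(ids):])
assert list(received_prices) == [10.5, 20.25, 30.25]
```

Because the wire bytes are the memory representation itself, the receiving side pays no deserialization cost, which is the property the Arrow format is designed to guarantee across processes and machines.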
As in-memory processing becomes more popular, the traditional tiering of RAM as working space and HDD as persistent storage is outdated. More tiers, like SSDs and nonvolatile memory, are now available, providing much higher data density than RAM with latency close to RAM’s at a fraction of the cost. Execution engines can take advantage of this more granular tiering and avoid the traditional spilling to disk, which degrades performance by an order of magnitude when the working dataset does not fit in main memory.
Julien Le Dem is a principal engineer at WeWork. He’s also the coauthor of Apache Parquet and the PMC chair of the project, and he’s a committer and PMC member on Apache Pig, Apache Arrow, and a few other projects. Previously, he was an architect at Dremio; tech lead for Twitter’s data processing tools, where he also obtained a two-character Twitter handle (@J_); and a principal engineer and tech lead working on content platforms at Yahoo, where he received his Hadoop initiation. His French accent makes his talks particularly attractive.
©2016, O’Reilly UK Ltd • (800) 889-8969 or (707) 827-7019 • Monday-Friday 7:30am-5pm PT • All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners. • email@example.com
Apache Hadoop, Hadoop, Apache Spark, Spark, and Apache are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries, and are used with permission. The Apache Software Foundation has no affiliation with and does not endorse, or review the materials provided at this event, which is managed by O'Reilly Media and/or Cloudera.