Schedule: Hadoop in Action sessions
Real-world case studies of the Hadoop ecosystem in action, from disruptive startups to industry giants.
Justin Kestelyn (Cloudera)
Mark Grover (Cloudera)
1:30pm–5:00pm Wednesday, 10/15/2014
This tutorial will help you get a jump-start on HBase development. We’ll start with a quick overview of HBase, its data model, and its architecture, and then we’ll dive directly into code to help you understand how to build HBase applications. We will also offer guidelines for good schema design, and will cover a few advanced concepts such as using HBase for transactions.
9:00am–12:30pm Wednesday, 10/15/2014
Are you looking for a deeper understanding of how to integrate components in the Apache Hadoop ecosystem to implement data management and processing solutions? Then this tutorial is for you. We'll walk through a clickstream analytics example illustrating how to architect solutions with Apache Hadoop, along with best practices and recommendations for using Hadoop and related tools.
11:00am–11:40am Thursday, 10/16/2014
Goldman Sachs is a leading global investment banking, securities, and investment management firm that provides a wide range of financial services. Goldman executes hundreds of millions of financial transactions per day, across nearly every market in the world. Learn how Goldman is harnessing knowledge, data, and compute power to maintain and increase its competitive edge.
11:50am–12:30pm Thursday, 10/16/2014
Transamerica is a financial services company moving to a more customer-centric model using Big Data. Our approach to this effort spans our Insurance, Annuity, and Retirement divisions. We went from a simple proof of concept to establishing Hadoop as a viable element of our enterprise data strategy. We cover the core components of our solution and focus on lessons learned from our experience.
1:45pm–2:25pm Thursday, 10/16/2014
American Express is transforming for the digital age! Learn how we unleashed Big Data into our ecosystem and built on the strength of our core capabilities to remain relevant in a rapidly changing environment. New commerce opportunities and innovative products are being delivered, and the chance to provide actionable insights, social analysis, and predictive modeling is growing exponentially.
2:35pm–3:15pm Thursday, 10/16/2014
The accumulation, access, and analysis of customer data (“the original Big Data”) are ingrained at L.L.Bean, which has been doing customer modeling since the 1960s. In line with today’s omnichannel imperative, however, the retailer has embraced a “new Big Data”-driven culture—democratizing data access and tools—in order to sustain its customer-centric philosophy.
4:15pm–4:55pm Thursday, 10/16/2014
CERN, home to the Large Hadron Collider (LHC), is at the forefront of science and technology. Come to this session to learn how projects at CERN are leveraging in-memory data management and Hadoop to derive real-time insights from sensor data, helping to manage the technical infrastructure of the LHC.
5:05pm–5:45pm Thursday, 10/16/2014
FICO has been delivering analytic solutions, such as its renowned credit scores, for nearly 60 years. Big data technologies like Hadoop promise FICO analysts the ability to build models much faster, and with greater accuracy, than before, but this new generation of tools challenges them to think differently.
11:00am–11:40am Friday, 10/17/2014
LinkedIn processes enormous volumes of events each day. In this talk, you will learn about the data challenges LinkedIn faced, how its teams came together to construct a solution, and the underlying stack powering that solution, including an interactive analytics infrastructure and a self-serve data visualization frontend that operate at scale.
11:50am–12:30pm Friday, 10/17/2014
In this panel discussion, individuals representing key stakeholders across the healthcare ecosystem will share the ways they're applying Hadoop to solve big data challenges that will ultimately improve the quality of patient care while driving better healthcare affordability.
1:45pm–2:25pm Friday, 10/17/2014
Medicine is undergoing a renaissance made possible by analyzing, and creating insights from, a huge and growing number of genomes. This session will showcase how ETL and MapReduce can be applied in a clinical setting.
2:35pm–3:15pm Friday, 10/17/2014
Automated image processing improves efficiency for a diverse range of applications from defect detection in manufacturing to tumor detection in medical images. We’ll go beyond traditional approaches to image processing, which fail for large image datasets, by leveraging Hadoop for processing a vast number of arbitrarily large images.
4:15pm–4:55pm Friday, 10/17/2014
Netflix continues to evolve its big data architecture in the cloud with performance enhancements and updated OSS offerings. We will share our experiences and selections in file formats, interactive query engines, and instance types. Genie emerges with updates to support YARN applications, and we will unveil a new performance visualization tool, Inviso.
5:05pm–5:45pm Friday, 10/17/2014
Kaiser Permanente is dedicated to improving the quality of healthcare, and big data presents numerous opportunities to drive this mission forward.