Presented By O'Reilly and Cloudera
Make Data Work
31 May–1 June 2016: Training
1 June–3 June 2016: Conference
London, UK

Schedule: Hadoop use cases sessions

12:05–12:45 Thursday, 2/06/2016
Location: Capital Suite 10/11 Level: Non-technical
Dan Jermyn (Royal Bank of Scotland), Connor Carreras (Trifacta)
Average rating: 3.90 (10 ratings)
Big data provides an unprecedented opportunity to really understand and engage with your customers, but only if you have the keys to unlock the value in the data. Through examples from the Royal Bank of Scotland, Dan Jermyn and Connor Carreras explain how to use data wrangling to harness the power of data stored on Hadoop and deliver personalized interactions to increase customer satisfaction.
14:05–14:45 Thursday, 2/06/2016
Location: Capital Suite 10/11 Level: Intermediate
Fergal Toomey (Corvil), Pierre Lacave (Corvil Ltd.)
Average rating: 2.14 (21 ratings)
Fergal Toomey and Pierre Lacave demonstrate how to effectively use Spark and Hadoop to reliably analyze data in high-speed trading environments across multiple machines in real time.
14:55–15:35 Thursday, 2/06/2016
Location: Capital Suite 10/11 Level: Intermediate
Deenar Toraskar (Think Reactive)
Average rating: 3.00 (2 ratings)
Value at risk (VaR) is a widely used risk measure. Because VaR is not simply additive, reporting it at any aggregate level poses unique challenges: traditional database aggregation functions don't work. Deenar Toraskar explains how Hive's complex data types and user-defined functions can be used to provide simple, fast, and flexible VaR aggregation.
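A minimal sketch of the idea behind the talk (not the speaker's actual implementation): VaR is a percentile of the P&L distribution, so summing per-position VaRs overstates portfolio risk whenever positions diversify each other. If each position stores its full vector of scenario P&Ls (as a Hive ARRAY column would), aggregation reduces to an elementwise sum followed by a percentile, which is what a custom aggregation function can do where SUM cannot. All names here are illustrative.

```python
import random

random.seed(42)
N = 10_000  # simulated P&L scenarios per position

def var_95(pnl):
    """95% value at risk: the loss exceeded in only 5% of scenarios."""
    return -sorted(pnl)[int(0.05 * len(pnl))]

# Two positions with partly offsetting (negatively correlated) risk.
pos_a = [random.gauss(0, 100) for _ in range(N)]
pos_b = [-0.5 * a + random.gauss(0, 50) for a in pos_a]

# Aggregation over P&L vectors: sum elementwise, then take the percentile.
portfolio = [a + b for a, b in zip(pos_a, pos_b)]

naive = var_95(pos_a) + var_95(pos_b)  # what naive SUM-style aggregation gives
true_var = var_95(portfolio)           # true portfolio VaR, smaller due to diversification
print(naive, true_var)
```

The gap between the two numbers is why plain SQL aggregates fail for VaR, and why carrying the scenario vector through the aggregation (via complex types) keeps the computation both correct and flexible across any grouping level.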
16:35–17:15 Thursday, 2/06/2016
Location: Capital Suite 10/11 Level: Intermediate
Ben Sharma (Zaloni)
Average rating: 2.00 (3 ratings)
Risk data aggregation and risk reporting (RDARR) is critical to compliance in financial services. Big data expert Ben Sharma explores multiple use cases to demonstrate how organizations in the financial services industry are building big data lakes that deliver the necessary components for risk data aggregation and risk reporting.
11:15–11:55 Friday, 3/06/2016
Location: Capital Suite 10/11 Level: Intermediate
Steven Noels (NGDATA)
Average rating: 3.50 (10 ratings)
Steven Noels explains how to prime the Hadoop ecosystem for real-time data analysis and actionability, examining ways to evolve from batch processing to real-time stream-based processing.
12:05–12:45 Friday, 3/06/2016
Location: Capital Suite 10/11 Level: Intermediate
Thomas Beer (Continental), Felix Werkmeister (Continental)
Average rating: 3.00 (2 ratings)
Experience tells us a decision is only as good as the information it is based on. The same is true for driving. The better a vehicle knows its surroundings, the better it can support the driver. Information makes vehicles safer, more efficient, and more comfortable. Thomas Beer and Felix Werkmeister explain how Continental exploits big data technologies for building information-driven vehicles.
14:55–15:35 Friday, 3/06/2016
Location: Capital Suite 14 Level: Intermediate
Tomer Shiran (Dremio)
Average rating: 4.00 (5 ratings)
Modern data is often messy and does not fit the old schema-on-write paradigm or even the newer schema-on-read paradigm. Some data effectively has no schema at all. Tomer Shiran explores how to analyze such data with Drill, covering Drill's internal architecture and explaining how type introspection can be used to query JSON and JSON-like data sources, such as MongoDB, without requiring a schema.
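To make the "type introspection" idea concrete, here is a hypothetical sketch (not Drill's implementation): instead of requiring a declared schema, the engine discovers each field's type by inspecting records as it reads them, and must cope with a field whose type varies across records.

```python
import json

# Heterogeneous JSON records: "age" is an int in some rows, a string in
# others, and "tags" appears only sometimes -- no single schema fits.
records = [
    '{"name": "ann", "age": 34}',
    '{"name": "bob", "age": 41, "tags": ["vip"]}',
    '{"name": "cal", "age": "unknown"}',
]

# Build the schema on the fly by observing value types per field.
schema = {}
for line in records:
    for field, value in json.loads(line).items():
        schema.setdefault(field, set()).add(type(value).__name__)

for field, types in sorted(schema.items()):
    print(field, sorted(types))
```

A field observed with multiple types (here "age" as both int and str) is precisely the case a schema-free query engine must resolve at query time rather than at load time.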