Presented By
O’Reilly + Cloudera
Make Data Work
29 April–2 May 2019
London, UK

2-Day Training Courses

All training courses run 9:00 - 17:00, Monday, 29 April through Tuesday, 30 April. To maintain a high level of hands-on learning and instructor interaction, each training course is limited in size.

Participants should plan to attend both days of this 2-day training course. To attend training courses, you must register for a Platinum or Training pass; training course registration does not include access to the tutorials on Tuesday.

Monday, 29 April - Tuesday, 30 April

9:00 - 17:00 Monday, 29 April & Tuesday, 30 April
Location: London Suite 3
Secondary topics:  Deep Learning, Model lifecycle management
Amir Issaei (Databricks)
The course covers the fundamentals of neural networks and how to build distributed Keras/TensorFlow models on top of Spark DataFrames. Throughout the class, you will use Keras, TensorFlow, Deep Learning Pipelines, and Horovod to build and tune models. You will also use MLflow to track experiments and manage the machine learning lifecycle. NOTE: This course is taught entirely in Python.
9:00 - 17:00 Monday, 29 April & Tuesday, 30 April
Location: S11 C
Secondary topics:  Deep Learning
Ana Hocevar (The Data Incubator)
The TensorFlow library provides computational graphs with automatic parallelization across resources, an architecture well suited to implementing neural networks. This training introduces TensorFlow's capabilities in Python, moving from building machine learning algorithms piece by piece to using the Keras API provided by TensorFlow, with several hands-on applications.
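To give a flavour of what "building machine learning algorithms piece by piece" means, here is a minimal, illustrative sketch of a single artificial neuron in plain Python; this is an assumption about the style of exercise, not actual course material, and the course itself works in TensorFlow rather than raw Python.

```python
import math

# Illustrative sketch: one artificial neuron built "piece by piece".
# A neuron computes a weighted sum of its inputs plus a bias, then
# applies a nonlinearity (here, the sigmoid function).
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

print(neuron([1.0, 2.0], [0.5, -0.25], 0.1))
```

Frameworks like TensorFlow express the same computation as nodes in a computational graph, which is what enables automatic parallelization and differentiation.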
9:00 - 17:00 Monday, 29 April & Tuesday, 30 April
Location: Capital Suite 1
Secondary topics:  Data preparation, data governance, and data lineage
Zachary Glassman (The Data Incubator)
We will walk through all the steps of developing a machine learning pipeline, from prototyping to production: data cleaning, feature engineering, model building and evaluation, and deployment. Students will extend these models into two applications built on real-world datasets. All work will be done in Python.
9:00 - 17:00 Monday, 29 April & Tuesday, 30 April
Location: Capital Suite 7
Secondary topics:  Deep Learning
Ian Cook (Cloudera)
Advancing your career in data science requires learning new languages and frameworks—but learners face an overwhelming array of choices, each with different syntaxes, conventions, and terminology. Ian Cook simplifies the learning process by elucidating the abstractions common to these systems. Through hands-on exercises, you'll overcome obstacles to getting started using new tools.
9:00 - 17:00 Monday, 29 April & Tuesday, 30 April
Location: Capital Suite 17
Secondary topics:  AI and machine learning in the enterprise
Angie Ma (ASI Data Science)
Angie Ma and Jonny Howell offer a condensed introduction to key AI and machine learning concepts and techniques, showing you what is (and isn't) possible with these exciting new tools and how they can benefit your organization.
9:00 - 17:00 Monday, 29 April & Tuesday, 30 April
Location: Capital Suite 16
Secondary topics:  Data Integration and Data Pipelines, Streaming and realtime analytics
Jesse Anderson (Big Data Institute)
This course takes participants through an in-depth look at Apache Kafka. We show how Kafka works and how to create real-time systems with it, including how to write consumers and producers. We then survey Kafka's ecosystem and how each component is used, with hands-on coverage of Kafka Streams, Kafka Connect, and KSQL.
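The producer/consumer pattern at the heart of Kafka can be illustrated, very loosely, with an in-memory queue standing in for a topic. This is a conceptual sketch only; it is not Kafka, has no brokers, partitions, or durability, and uses only the Python standard library.

```python
from queue import Queue

# Toy stand-in for a Kafka topic: an in-memory FIFO queue.
# Real Kafka topics are durable, partitioned, replicated logs.
topic = Queue()

def produce(record):
    # A producer appends a record to the topic.
    topic.put(record)

def consume():
    # A consumer reads the next record, in order.
    return topic.get()

produce({"event": "page_view", "user": "alice"})
print(consume())
```

Kafka generalizes this pattern to many independent producers and consumer groups reading the same durable log at their own pace.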
9:00 - 17:00 Monday, 29 April & Tuesday, 30 April
Location: London Suite 3
Secondary topics:  AI and Data technologies in the cloud, Data Integration and Data Pipelines
Jorge A. Lopez (Amazon Web Services)
Serverless technologies let you build and scale applications and services rapidly without the need to provision or manage servers. In this workshop, we show you how to incorporate serverless concepts into your big data architectures, looking at design patterns to ingest, store, and analyze your data. You will build a big data application using AWS technologies such as S3, Athena, Kinesis, and more.
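In serverless architectures the unit of compute is typically a handler function, such as an AWS Lambda. The following is a hypothetical sketch of an ingest-step handler, with a made-up `user_id` field and response shape for illustration; it is not taken from the workshop materials.

```python
import json

# Hypothetical AWS Lambda-style handler for a serverless ingest step:
# parse an incoming record, validate it, and return a status response.
def handler(event, context=None):
    body = json.loads(event["body"])
    if "user_id" not in body:
        # Reject malformed records before they enter the pipeline.
        return {"statusCode": 400,
                "body": json.dumps({"error": "missing user_id"})}
    # In a real pipeline, the record would be written to S3 or Kinesis here.
    return {"statusCode": 200, "body": json.dumps({"accepted": True})}

print(handler({"body": json.dumps({"user_id": 42})}))
```

Because the handler is a plain function, it can be unit-tested locally with sample events before being deployed behind an API Gateway or Kinesis trigger.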