Presented By O’Reilly and Intel AI
Put AI to work
8-9 Oct 2018: Training
9-11 Oct 2018: Tutorials & Conference
London, UK

Models and Methods

 

13:30–17:00 Tuesday, 9 October 2018
Location: Buckingham Room - Palace Suite
Secondary topics:  Deep Learning models, Financial Services, Temporal data and time-series
Yijing Chen (Microsoft), Dmitry Pechyoni (Microsoft), Angus Taylor (Microsoft), Vanja Paunic (Microsoft)
Average rating: 3.67 (3 ratings)
Businesses use forecasting to make better decisions and allocate resources more effectively. Recurrent neural networks (RNNs) have achieved great success in text, speech, and video analysis but are less often used for time series forecasting. Join Yijing Chen, Dmitry Pechyoni, Angus Taylor, and Vanja Paunic to learn how to apply RNNs to time series forecasting. Read more.
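To make the idea concrete, here is a minimal sketch (not the presenters' code) of one-step-ahead time series forecasting with a recurrent network in PyTorch; the model, window length, toy data, and training loop are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RNNForecaster(nn.Module):
    def __init__(self, input_size=1, hidden_size=32):
        super().__init__()
        self.rnn = nn.GRU(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                 # x: (batch, window, 1)
        _, h = self.rnn(x)                # h: (1, batch, hidden)
        return self.head(h[-1])           # next-step prediction: (batch, 1)

# Train on sliding windows: predict the value at t+1 from the previous 24 steps.
series = torch.sin(torch.linspace(0, 50, 500)).unsqueeze(-1)        # toy series
windows = torch.stack([series[i:i + 24] for i in range(len(series) - 25)])
targets = torch.stack([series[i + 24] for i in range(len(series) - 25)])

model = RNNForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(windows), targets)
    loss.backward()
    opt.step()
```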
11:55–12:35 Wednesday, 10 October 2018
Location: King's Suite - Balmoral
Secondary topics:  Deep Learning models, Ethics, Privacy, and Security
Ryan Micallef (Cloudera Fast Forward Labs)
Average rating: 4.00 (1 rating)
Imagine building a model whose training data is collected on edge devices such as cell phones or sensors. Each device collects data unlike any other, and the data cannot leave the device because of privacy concerns or unreliable network access. This challenging situation is known as federated learning. Ryan Micallef discusses the algorithmic solutions and the product opportunities. Read more.
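For orientation, a minimal sketch of federated averaging (FedAvg), the basic pattern behind federated learning: each device trains on its own private data, and only model weights, never the raw data, are averaged centrally. The function names and toy data are assumptions for illustration, not the speaker's implementation.

```python
import copy
import torch
import torch.nn as nn

def local_update(global_model, local_data, epochs=1, lr=0.01):
    """Train a copy of the global model on one device's private data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in local_data:
            opt.zero_grad()
            nn.functional.mse_loss(model(x), y).backward()
            opt.step()
    return model.state_dict()

def federated_average(global_model, device_datasets):
    """One communication round: average the locally trained weights."""
    updates = [local_update(global_model, data) for data in device_datasets]
    avg = {k: torch.stack([u[k] for u in updates]).mean(dim=0) for k in updates[0]}
    global_model.load_state_dict(avg)
    return global_model

# Toy example: three "devices", each holding a few private (x, y) pairs.
devices = [[(torch.randn(8, 4), torch.randn(8, 1))] for _ in range(3)]
model = federated_average(nn.Linear(4, 1), devices)
```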
11:55–12:35 Wednesday, 10 October 2018
Location: Windsor Suite
Secondary topics:  Deep Learning models, Ethics, Privacy, and Security
Alan Mosca (nPlan)
Alan Mosca shows how any deep learning model can be improved and made more secure with targeted ensemble methods and similar techniques, and demonstrates how to use these techniques in the Toupee deep learning framework to create production-ready models. Read more.
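As a rough illustration of the simplest member of this family of techniques, here is a plain prediction-averaging ensemble in PyTorch; targeted ensembles and the Toupee framework go further, and all names and sizes below are assumptions.

```python
import torch
import torch.nn as nn

def make_member():
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))

members = [make_member() for _ in range(5)]       # independently trained models

def ensemble_predict(x):
    """Average the members' softmax outputs; typically more accurate and more
    robust to small input perturbations than any single member."""
    probs = torch.stack([member(x).softmax(dim=-1) for member in members])
    return probs.mean(dim=0)

x = torch.randn(8, 20)
print(ensemble_predict(x).argmax(dim=1))
```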
13:45–14:25 Wednesday, 10 October 2018
Location: Windsor Suite
Secondary topics:  Temporal data and time-series
Business forecasting generally employs machine learning methods for longer-horizon, nonlinear use cases and econometric approaches for linear trends. Pasi Helenius and Larry Orimoloye outline a hybrid approach that combines deep learning and econometrics. This method is particularly useful in areas such as competitive event (CE) forecasting (e.g., sports and political events). Read more.
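One possible shape of such a hybrid, sketched under the assumption that an econometric-style regression handles the linear trend and a small network models the nonlinear residual (the presenters' actual method may differ):

```python
import numpy as np
import torch
import torch.nn as nn

t = np.arange(200, dtype=np.float32)
y = 0.5 * t + 10 * np.sin(t / 8) + np.random.randn(200).astype(np.float32)  # toy series

# 1) Econometric part: ordinary least squares fit of the linear trend.
X = np.stack([np.ones_like(t), t], axis=1)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
trend = X @ beta

# 2) Deep learning part: a small network learns the nonlinear residual.
resid = torch.tensor(y - trend).unsqueeze(-1)
inputs = torch.tensor(t).unsqueeze(-1) / 200.0
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    nn.functional.mse_loss(net(inputs), resid).backward()
    opt.step()

forecast = trend + net(inputs).detach().squeeze(-1).numpy()   # hybrid prediction
```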
14:35–15:15 Wednesday, 10 October 2018
Location: King's Suite - Sandringham
Secondary topics:  Deep Learning models, Edge computing and Hardware
Bruno Fernandez-Ruiz details a unified network that jointly performs several mission-critical tasks in real time in a mobile environment, within the context of driving. Along the way, he outlines the challenges that emerge when training a single mobile network for multiple tasks, such as object detection, object attribute recognition, classification, and tracking. Read more.
14:35–15:15 Wednesday, 10 October 2018
Location: Windsor Suite
Secondary topics:  Deep Learning models, Temporal data and time-series
Andrea Pasqua (Uber)
Andrea Pasqua investigates the merits of using deep learning and other machine learning approaches in the area of forecasting and describes some of the machine learning approaches Uber uses to forecast time series of business relevance. Read more.
16:00–16:40 Wednesday, 10 October 2018
Location: King's Suite - Balmoral
Secondary topics:  Media, Marketing, Advertising, Text, Language, and Speech
Rahul Dodhia (Microsoft)
Artificial intelligence is mature enough to make substantial contributions to the legal industry. Rahul Dodhia offers an overview of an AI assistant that can perform routine tasks such as contract review and checking compliance with regulations at higher accuracy rates than legal professionals. Read more.
16:00–16:40 Wednesday, 10 October 2018
Location: King's Suite - Sandringham
Secondary topics:  Computer Vision, Deep Learning models, Ethics, Privacy, and Security, Retail and e-commerce
Pin-Yu Chen (IBM Research AI)
Average rating: 5.00 (1 rating)
Neural networks are particularly vulnerable to adversarial inputs: carefully designed perturbations can lead a well-trained model to misbehave, raising new concerns for safety- and security-critical applications. Pin-Yu Chen offers an overview of CLEVER, a comprehensive robustness measure that can be used to assess the robustness of any neural network classifier. Read more.
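To illustrate the vulnerability being measured (not the CLEVER metric itself), here is a minimal fast gradient sign method (FGSM) sketch that perturbs an input to increase a classifier's loss; the toy model and data are assumptions.

```python
import torch
import torch.nn as nn

def fgsm(model, x, y, epsilon=0.05):
    """Return x perturbed in the direction that increases the loss."""
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

# Toy example with an untrained classifier and a random "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x, y = torch.rand(1, 1, 28, 28), torch.tensor([3])
x_adv = fgsm(model, x, y)
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))  # predictions may differ
```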
11:05–11:45 Thursday, 11 October 2018
Location: King's Suite - Balmoral
Secondary topics:  Deep Learning models, Temporal data and time-series
Vitaly Kuznetsov (Google), Zelda Mariet (MIT)
Vitaly Kuznetsov and Zelda Mariet compare sequence-to-sequence modeling to classical time series models and provide the first theoretical analysis of a framework that uses sequence-to-sequence models for time series forecasting. Read more.
11:05–11:45 Thursday, 11 October 2018
Location: Hilton Meeting Room 3-6
Secondary topics:  Media, Marketing, Advertising, Text, Language, and Speech
Ryan Micallef (Cloudera Fast Forward Labs)
Multitask learning is an approach to problem solving that allows supervised algorithms to master more than one objective in parallel. Ryan Micallef shares a multitask neural net in PyTorch trained to classify news from several publications and shows how analyzing the task-specific and task-agnostic representations such networks learn highlights distinct language use across publications. Read more.
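A minimal sketch of such a network, assuming a shared (task-agnostic) encoder with one classification head per task; the architecture and sizes are illustrative rather than the presenter's model.

```python
import torch
import torch.nn as nn

class MultitaskClassifier(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=64, num_classes_per_task=(4, 2)):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim)           # shared encoder
        self.heads = nn.ModuleList(
            [nn.Linear(embed_dim, n) for n in num_classes_per_task]   # task-specific heads
        )

    def forward(self, token_ids, task):
        shared = self.embed(token_ids)        # task-agnostic representation
        return self.heads[task](shared)       # task-specific prediction

model = MultitaskClassifier()
tokens = torch.randint(0, 10000, (8, 20))     # a batch of 8 token-ID sequences
topic_logits = model(tokens, task=0)          # e.g., topic classification
source_logits = model(tokens, task=1)         # e.g., publication identification
```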
11:55–12:35 Thursday, 11 October 2018
Location: King's Suite - Balmoral
Secondary topics:  Deep Learning models
David Barber (UCL)
While great strides have been made in perceptual AI (for example, in speech recognition), there's been relatively modest progress in reasoning AI—systems that can interact with us in natural ways and understand the objects in our environment. David Barber explains why general AI will be out of reach until we address how to endow machines with knowledge of our environment. Read more.
11:55–12:35 Thursday, 11 October 2018
Location: King's Suite - Sandringham
Secondary topics:  Deep Learning models, Retail and e-commerce, Text, Language, and Speech
Dafna Shahaf (The Hebrew University of Jerusalem)
Average rating: 4.00 (1 rating)
The availability of large idea repositories (e.g., patents) could significantly accelerate innovation and discovery by providing people inspiration from solutions to analogous problems. Dafna Shahaf presents an algorithm that automatically discovers analogies in unstructured data and demonstrates how these analogies significantly increased people's likelihood of generating creative ideas. Read more.
13:45–14:25 Thursday, 11 October 2018
Location: King's Suite - Balmoral
Secondary topics:  Media, Marketing, Advertising, Text, Language, and Speech
Guy Feigenblat (IBM Research AI)
Average rating: 4.00 (3 ratings)
Automatic summarization is the computational process of shortening one or more text documents in order to identify their key points. Guy Feigenblat surveys recent advances in unsupervised automated summarization technologies and discusses recent research publications and datasets. Guy concludes with an overview of a novel summarization technology developed by IBM. Read more.
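As a point of reference, here is a toy unsupervised extractive baseline that scores sentences by word frequency; it is far simpler than the technologies surveyed in the talk and is included only to make the task concrete. The function name is an assumption.

```python
import re
from collections import Counter

def summarize(text, num_sentences=2):
    """Keep the sentences whose words are most frequent in the document."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r'\w+', s.lower())),
        reverse=True,
    )
    top = set(scored[:num_sentences])
    return ' '.join(s for s in sentences if s in top)   # preserve original order

print(summarize("Machine learning needs data. Summaries shorten documents. "
                "Good summaries keep key points. The weather was nice."))
```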
13:45–14:25 Thursday, 11 October 2018
Location: Windsor Suite
Secondary topics:  Computer Vision, Deep Learning models, Retail and e-commerce
Florian Wilhelm (inovex GmbH)
Average rating: 5.00 (1 rating)
Even in the age of big data, labeled data is a scarce resource in many machine learning use cases. Florian Wilhelm evaluates generative adversarial networks (GANs) used to extract information from vehicle registrations under varying amounts of labeled data, compares their performance with supervised learning techniques, and demonstrates a significant improvement when using unlabeled data. Read more.
13:45–14:25 Thursday, 11 October 2018
Location: Westminster Suite
Secondary topics:  Computer Vision, Deep Learning models, Text, Language, and Speech
Lars Hulstaert (Microsoft)
Transfer learning allows data scientists to leverage insights from large labeled datasets. The general idea is to apply knowledge learned on tasks with abundant labeled data to settings where labeled data is scarce. Lars Hulstaert explains what transfer learning is and demonstrates how it can boost your NLP or CV pipelines. Read more.
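A minimal sketch of the fine-tuning recipe for vision, assuming torchvision's pretrained ResNet-18 as the source of knowledge: freeze the pretrained features and train only a new head on the small target dataset. The class count and toy batch are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)       # knowledge from the large source task
for param in model.parameters():
    param.requires_grad = False                # freeze the pretrained features

model.fc = nn.Linear(model.fc.in_features, 5)  # new head for a 5-class target task
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# Only the new head is trained on the small labeled target dataset.
x, y = torch.randn(4, 3, 224, 224), torch.randint(0, 5, (4,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```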
14:35–15:15 Thursday, 11 October 2018
Location: King's Suite - Balmoral
Secondary topics:  Financial Services, Media, Marketing, Advertising, Text, Language, and Speech
Amy Heineike (Primer)
Average rating: 3.67 (3 ratings)
When building natural language processing (NLP)-based applications, you quickly learn that no single NLP algorithm can handle the wide range of tasks required to turn text into value. Amy Heineike explains how she orchestrates natural language processing, understanding, and generation algorithms to build text-based AI applications for Fortune 500 companies. Read more.
14:35–15:15 Thursday, 11 October 2018
Location: Westminster Suite
Secondary topics:  Data Networks and Data Markets
Roger Chen (Computable)
Blockchain technologies offer new internet primitives for creating open and online data marketplaces. Roger Chen explores how data markets can be constructed and how they offer a shared resource on the internet for AI-based research, discovery, and development. Read more.
14:35–15:15 Thursday, 11 October 2018
Location: Hilton Meeting Room 3-6
Secondary topics:  Deep Learning models, Interfaces and UX, Text, Language, and Speech
Peter Cahill (Voysis)
Peter Cahill explains why WaveNet will be the next generation of recognition, synthesis, and voice-activity detection. Read more.
16:00–16:40 Thursday, 11 October 2018
Location: King's Suite - Balmoral
Secondary topics:  Reinforcement Learning, Retail and e-commerce, Text, Language, and Speech
Dr. Sid J Reddy (Conversica)
Sid Reddy shows you how to avoid the hype and decide which use cases are best suited to deep reinforcement learning. You'll explore the Markov decision process in the context of conversational AI and learn how to set up the environment, states, agent actions, transition probabilities, reward functions, and end states. You'll also discover when to use end-to-end reinforcement learning. Read more.
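To make those ingredients concrete, here is a toy tabular Q-learning sketch on a small chain environment with explicit states, actions, transition probabilities, rewards, and an end state; it is illustrative only, not a conversational-AI agent.

```python
import random

STATES, ACTIONS, END_STATE = range(5), (0, 1), 4   # 0 = move left, 1 = move right

def step(state, action):
    """Transition: the intended move succeeds with probability 0.9; reward only at the end state."""
    if random.random() < 0.9:
        state = max(0, state - 1) if action == 0 else min(END_STATE, state + 1)
    reward = 1.0 if state == END_STATE else 0.0
    return state, reward, state == END_STATE

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.95, 0.1
for _ in range(2000):                               # training episodes
    s, done = 0, False
    while not done:
        if random.random() < epsilon:
            a = random.choice(ACTIONS)                           # explore
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])        # exploit
        s2, r, done = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

print(max(ACTIONS, key=lambda act: Q[(0, act)]))    # learned start-state action (should be 1)
```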
16:00–16:40 Thursday, 11 October 2018
Location: King's Suite - Sandringham
Secondary topics:  Computer Vision, Edge computing and Hardware, Platforms and infrastructure
Paul Brasnett (Imagination Technologies)
In recent years, we’ve seen a shift from traditional vision algorithms to deep neural network algorithms. While many companies expect to move to deep learning for some or all of their algorithms, they may have a significant investment in classical vision. Paul Brasnett explains how to express and adapt a classical vision algorithm to become a trainable DNN. Read more.
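One way to picture this, as a hedged sketch rather than the presenter's method: initialize a convolutional layer with a classical hand-designed kernel (here a Sobel edge filter) so it reproduces the classical algorithm exactly, then let backpropagation fine-tune it like any other DNN layer.

```python
import torch
import torch.nn as nn

sobel_x = torch.tensor([[-1., 0., 1.],
                        [-2., 0., 2.],
                        [-1., 0., 1.]])

edge_layer = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
with torch.no_grad():
    edge_layer.weight.copy_(sobel_x.view(1, 1, 3, 3))   # start from the classical filter

# The layer now reproduces the classical operation but remains trainable:
image = torch.rand(1, 1, 64, 64)
edges = edge_layer(image)                                # classical behaviour
edges.sum().backward()                                   # gradients flow, so it can adapt
```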
16:00–16:40 Thursday, 11 October 2018
Location: Windsor Suite
Secondary topics:  Computer Vision, Deep Learning models, Deep Learning tools
Vanja Paunic (Microsoft), Patrick Buehler (Microsoft)
Average rating: 2.00 (2 ratings)
Dramatic progress has been made in computer vision. Deep neural networks (DNNs) trained on millions of images can recognize thousands of different objects, and they can be customized to new use cases. Vanja Paunic and Patrick Buehler outline simple methods and tools that enable users to easily and quickly adapt Microsoft's state-of-the-art DNNs for use in their own computer vision solutions. Read more.