14–17 Oct 2019

Speaker slides & video

Presentation slides will be made available after the session has concluded and the speaker has given us the files. Check back if you don't see the file you're looking for—it might be available later! (However, please note some speakers choose not to share their presentations.)

Sridhar Alla (BlueWhale)
Any business, big or small, depends on analytics, whether the goal is revenue generation, churn reduction, or sales and marketing effectiveness. No matter the algorithms and techniques used, the result depends on the accuracy and consistency of the data being processed. Sridhar Alla examines techniques for evaluating data quality and detecting anomalies in the data.
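As a minimal illustration of the kind of anomaly detection the session covers (not the speaker's own method), the sketch below flags suspicious records with scikit-learn's IsolationForest; the synthetic data and contamination rate are assumptions.

```python
# Illustrative sketch: flagging anomalous records in a numeric dataset
# with scikit-learn's IsolationForest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))  # typical records
outliers = rng.uniform(low=-8, high=8, size=(10, 2))     # corrupted records
data = np.vstack([normal, outliers])

detector = IsolationForest(contamination=0.02, random_state=42)
labels = detector.fit_predict(data)                      # -1 marks anomalies

print("flagged", int((labels == -1).sum()), "suspicious records")
```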
Michael Friedrich and Stefanie Grunwald explore how an algorithm capable of playing Space Invaders can also improve your cloud service's automated scaling mechanism.
Walter Riviera (Intel)
Walter Riviera details three key shifts in the AI landscape—incredibly large models with billions of parameters, massive clusters of compute nodes supporting AI, and the exploding volume of data meeting ever-stricter latency requirements—how to navigate them, and when to explore hardware acceleration.
Rajib Biswas (Ericsson)
Rajib Biswas outlines the application of AI algorithms like generative adversarial networks (GANs) to solve natural language synthesis tasks. Join in to learn how AI can accomplish complex tasks like machine translation, write poetry with style, read a novel, and answer your questions.
Konrad Wawruch (7bulls.com)
Konrad Wawruch explores real business use of state-of-the-art methods for financial time series forecasting (based on winning methods from the M4 competition) and asset portfolio optimization (based on Monte Carlo tree search with neural networks, the AlphaZero approach). He presents a complete investment platform with an AI workflow and real-time broker integration, including a demo of real-world usage.
Tuhin Sharma (Binaize), Bargava Subramanian (Binaize)
The number of internet-enabled devices in modern smart buildings is growing exponentially: IoT devices such as temperature sensors, lighting controls, and IP cameras monitor conditions throughout these buildings. Tuhin Sharma and Bargava Subramanian explain how they built anomaly-detection models using federated learning—which is privacy preserving and doesn't require data to be moved to the cloud—for data quality and cybersecurity.
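To make the federated learning idea concrete, here is a toy sketch under assumed data and a simple least-squares model: each building trains locally and only model weights reach the coordinator. It is illustrative only, not the speakers' system.

```python
# Minimal federated-averaging sketch: raw sensor data never leaves a building;
# only locally updated weights are shared and averaged.
import numpy as np

def local_update(weights, local_data, lr=0.01):
    """One gradient-descent step on a least-squares model, using local data only."""
    X, y = local_data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(weight_list):
    """Coordinator averages the locally trained weights (no raw data moves)."""
    return np.mean(weight_list, axis=0)

rng = np.random.default_rng(0)
global_w = np.zeros(3)
buildings = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]

for _ in range(20):  # communication rounds
    local_ws = [local_update(global_w, data) for data in buildings]
    global_w = federated_average(local_ws)

print("aggregated model weights:", global_w)
```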
AI-powered market research is performed by indirect approaches based on sparse and implicit consumer feedback (e.g., social network interactions, web browsing, or online purchases). These approaches are more scalable, authentic, and suitable for real-time consumer insights. Gianmario Spacagna proposes a novel audience-projection algorithm able to provide consumer insights across multiple domains.
Tyler Dunn (Rasa)
AI assistants are getting a great deal of attention from industry and research. However, the majority of assistants built to date are still developed using a state machine and a set of rules, which doesn't scale in production. Tyler Dunn explores how to build AI assistants that go beyond FAQ interactions using machine learning and open source tools.
Antje Barth (AWS)
Container and cloud native technologies around Kubernetes have become the de facto standard in modern ML and AI application development. Antje Barth examines common architecture blueprints and popular technologies used to integrate AI into existing infrastructures and explains how you can build a production-ready containerized platform for deep learning.
Jameson Toole (Fritz AI)
Getting machine learning models ready for use on device is a major challenge. Drag-and-drop training tools can get you started, but the models they produce aren’t small enough or fast enough to ship. Jameson Toole walks you through optimization, pruning, and compression techniques to keep app sizes small and inference speeds high.
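One of the compression routes the talk touches on can be sketched with TensorFlow Lite post-training quantization; the stand-in model below is an assumption, not the speaker's example.

```python
# Hedged sketch: shrinking a Keras model for on-device use with TensorFlow Lite
# post-training quantization (one of several compression options).
import tensorflow as tf

# A stand-in model; in practice this would be your trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables weight quantization
tflite_bytes = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_bytes)
print("quantized model size:", len(tflite_bytes), "bytes")
```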
Thomas Phelan (HPE BlueData)
Today, organizations understand the need to keep pace with new technologies when it comes to performing data science with machine learning and deep learning, but these new technologies come with their own challenges. Thomas Phelan demonstrates the deployment of TensorFlow, Horovod, and Spark using the NVIDIA CUDA stack on Docker containers in a secure multitenant environment.
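As a rough sketch of the distributed training layer (the container, CUDA, and multitenancy pieces of the talk are assumed to be in place), Horovod's Keras integration looks roughly like this; the model and synthetic data are placeholders.

```python
# Data-parallel training sketch with Horovod and Keras; launch one process per
# GPU, e.g. via horovodrun.
import numpy as np
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()

# Placeholder model and synthetic data standing in for a real workload.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
x = np.random.rand(256, 784).astype("float32")
y = np.random.randint(0, 10, size=256)

# Scale the learning rate by the number of workers and wrap the optimizer so
# gradients are averaged across GPUs.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.Adam(0.001 * hvd.size()))
model.compile(optimizer=opt, loss="sparse_categorical_crossentropy")

# Keep workers in sync by broadcasting initial weights from rank 0.
model.fit(x, y, batch_size=32, epochs=1, verbose=0,
          callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)])
```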
Yan Zhang (Microsoft), Mathew Salvaris (Microsoft)
When IoT meets AI, a new round of innovations begins. Yan Zhang and Mathew Salvaris examine the methodology, practice, and tools around deploying machine learning models on the edge. They offer a step-by-step guide to creating an ML model using Python, packaging it in a Docker container, and deploying it as a local service on an edge device as well as deployment on GPU-enabled edge devices.
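The "local service" step can be sketched as a small Flask app wrapping a trained model; the model.pkl file and endpoint below are hypothetical, and container packaging and edge deployment happen outside this script.

```python
# Illustrative scoring service: an edge device sends sensor readings over HTTP
# and receives predictions from a locally loaded model.
import pickle

import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)
with open("model.pkl", "rb") as f:  # hypothetical pre-trained model file
    model = pickle.load(f)

@app.route("/score", methods=["POST"])
def score():
    features = np.array(request.get_json()["features"]).reshape(1, -1)
    return jsonify({"prediction": model.predict(features).tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```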
Rebecca Gu (Electron), Cris Lowery (Baringa Partners)
In a future of widespread algorithmic pricing, cooperation between algorithms is easier than ever, resulting in coordinated price rises. Rebecca Gu and Cris Lowery explore how a Q-learner algorithm can inadvertently reach a collusive outcome in a virtual marketplace, which industries are likely to be subject to greater restrictions or scrutiny, and what future digital regulation might look like.
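A toy version of the mechanism at the heart of the talk—under an assumed demand curve and price grid, not the speakers' simulation—is the tabular Q-learning update applied to repeated price setting; here the rival is simplified to a random player, whereas the collusive drift discussed in the session emerges when both sides learn.

```python
# Toy pricing Q-learner: the agent conditions on the rival's last price and
# updates a Q-table from its own profit signal.
import numpy as np

prices = [1.0, 1.5, 2.0]                   # discrete price levels
q = np.zeros((len(prices), len(prices)))   # Q[state = rival's last price, action]
alpha, gamma, eps = 0.1, 0.95, 0.1

rng = np.random.default_rng(1)
state = 0
for _ in range(50_000):
    # Epsilon-greedy action selection over the agent's own prices.
    action = rng.integers(len(prices)) if rng.random() < eps else int(q[state].argmax())
    rival = rng.integers(len(prices))      # simplified stand-in for the other seller
    demand = max(0.0, 3.0 - 2 * prices[action] + prices[rival])
    reward = prices[action] * demand       # simple profit signal
    next_state = rival
    # Standard tabular Q-learning update.
    q[state, action] += alpha * (reward + gamma * q[next_state].max() - q[state, action])
    state = next_state

print("learned price per rival state:", [prices[int(a)] for a in q.argmax(axis=1)])
```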
Katharine Jarmul (KIProtect)
Katharine Jarmul sates your curiosity about how far we've come in implementing privacy within machine learning systems. She dives into recent advances in privacy measurements and explains how these have changed the approach to privacy in machine learning. You'll discover new techniques including differentially private data collection, federated learning, and homomorphic techniques.
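One of the techniques named above, differentially private data collection, can be sketched with the Laplace mechanism; the epsilon value and query below are illustrative assumptions.

```python
# Differentially private count via the Laplace mechanism.
import numpy as np

def dp_count(values, predicate, epsilon=0.5):
    """Return a noisy count; the sensitivity of a counting query is 1."""
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [23, 35, 31, 47, 52, 29, 61, 44]
print("noisy count of users over 40:", dp_count(ages, lambda a: a > 40))
```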
Anastasia Kouvela (A.T. Kearney), Bharath Thota (A.T. Kearney)
The Analytics Impact Index gives organizations an understanding of the value potential of analytics as well as the capabilities required to capture the most value. Anastasia Kouvela and Bharath Thota walk you through the 2019 results and the analytics journey of leading global organizations and empower companies to develop a case for change.
Tim Daines (QuantumBlack), Philip Pilgerstorfer (QuantumBlack)
Data scientists feel naturally comfortable with the language of mathematics, while designers think in the language of human empathy. Creating a bridge between the two was essential to the success of a recent project at an energy company. Tim Daines and Philip Pilgerstorfer detail what they learned while creating these bridges, showcasing techniques through a series of “aha” moments.
Voice-based AI continues to gain popularity among customers, businesses, and brands, but it's important to understand that, while it puts a slew of new data at our disposal, the technology is still in its infancy. Andreas Kaltenbrunner examines three ways voice assistants will make big data analytics more complex and the various steps you can take to manage this in your company.
Mark Madsen (Teradata)
The growing complexity of data science leads to black box solutions that few people in an organization understand. Mark Madsen explains why reproducibility—the ability to get the same results given the same information—is a key element to build trust and grow data science use. And one of the foundational elements of reproducibility (and successful ML projects) is data management.
Paco Nathan (derwen.ai)
Paco Nathan outlines the history and landscape for vendors, open source projects, and research efforts related to AutoML. Starting from the perspective of an AI expert practitioner who speaks business fluently, Paco unpacks the ground truth of AutoML—translating from the hype into business concerns and practices in a vendor-neutral way.
Charlotte Han (Independent)
According to research by AI2, China is poised to overtake the US in the most-cited 1% of AI research papers by 2025. The view that China is a copycat but not an innovator may no longer be true. Charlotte Han explores what China's government funding, culture, and access to massive data pools mean for AI development and how the world could benefit from such advancement.
Ritika Gunnar explores why you need to focus on your organization’s culture and build a data-first approach to shape a strong, AI-ready organization.
Thomas Phelan (HPE BlueData)
Join Thomas Phelan to learn whether the combination of containers with large-scale distributed data analytics and machine learning applications is like combining oil and water or like peanut butter and chocolate.
Holger Kyas (Open Group, Helvetia Insurances, University of Applied Sciences)
Holger Kyas details the AI multicloud broker, which is triggered by Amazon Alexa and mediates between AWS Comprehend (Amazon), Azure Text Analytics (Microsoft), GCP Natural Language (Google), and Watson Tone Analyzer (IBM) to compare and analyze sentiment. The extended AI part generates new sentences (e.g., marketing slogans) with a recurrent neural network (RNN).
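The sentence-generating RNN can be sketched at toy scale with a character-level LSTM in Keras; the tiny slogan corpus and hyperparameters below are assumptions, and real systems need far more data and training.

```python
# Toy character-level RNN that learns to continue short slogan-like text.
import numpy as np
import tensorflow as tf

text = "think different. just do it. impossible is nothing. "
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
seq_len = 10
X = np.array([[idx[c] for c in text[i:i + seq_len]] for i in range(len(text) - seq_len)])
y = np.array([idx[text[i + seq_len]] for i in range(len(text) - seq_len)])

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 16),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=50, verbose=0)

# Sample a continuation character by character (greedy decoding).
seed = list(text[:seq_len])
for _ in range(40):
    probs = model.predict(np.array([[idx[c] for c in seed[-seq_len:]]]), verbose=0)[0]
    seed.append(chars[int(np.argmax(probs))])
print("".join(seed))
```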
Kim Hazelwood (Facebook), Mohamed Fawzy (Facebook)
AI plays a key role in achieving Facebook's mission of connecting people and building communities. Nearly every visible product is powered by machine learning algorithms at its core, from delivering relevant content to making the platform safe. Kim Hazelwood and Mohamed Fawzy explain how applied ML has continued to change the landscape of the platforms and infrastructure at Facebook.
Zhe Zhang (LinkedIn)
From people you may know (PYMK) to economic graph research, machine learning is the oxygen that powers how LinkedIn serves its 630M+ members. Zhe Zhang provides you with an architectural overview of LinkedIn’s typical machine learning pipelines complemented with key types of ML use cases.
Tobias Martens (whoelse.ai)
More than 50% of all interactions between humans and machines are expected to be speech-based by 2022. The challenge: every AI interprets human language slightly differently. Tobias Martens details current issues in NLP interoperability and uses Chomsky's theory of universal hard-wired grammar to outline a framework to make the human voice in AI universal, accountable, and computable.
Lyndon Leggate walks you through a step-by-step demonstration of how you can level up your reinforcement learning (RL) skills through autonomous driving.
Manas R Kar (Episource)
Natural language processing (NLP) is hard, especially for clinical text. Manas Ranjan Kar explains the multiple challenges of NLP for clinical text and why it's so important to invest a fair amount of time in domain-specific feature engineering. It's also crucial to understand how to diagnose an NLP model's performance and identify possible gaps.
Ganes Kesari (Gramener), Soumya Ranjan (Gramener)
In many countries, policy decisions are disconnected from data, and very few avenues exist to understand deeper demographic and socioeconomic insights. Ganes Kesari and Soumya Ranjan explain how satellite imagery can be a powerful aid when viewed through the lens of deep learning. When combined with conventional data, it can help answer important questions and show inconsistencies in survey data.
Emily Webber (Amazon Web Services)
If you've ever wondered if you could use AI to inform public policy, join Emily Webber as she combines classic economic methods with AI techniques to train a reinforcement learning agent on decades of randomized control trials. You'll learn about classic philosophical foundations for public policy decision making and how these can be applied to solve the problems that impact the many.
Jeff Jonas (Senzing)
Entity resolution—determining “who is who” and “who is related to whom”—is essential to almost every industry, including banking, insurance, healthcare, marketing, telecommunications, social services, and more. Jeff Jonas details how you can use a purpose-built real-time AI, created for general-purpose entity resolution, to gain new insights and make better decisions faster.
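For readers new to entity resolution, a minimal sketch of the "who is who" matching problem (illustrative only, not Senzing's engine) compares names with fuzzy similarity and shared phone numbers:

```python
# Illustrative entity resolution: decide whether two records refer to the same
# person via fuzzy name similarity or a shared phone number.
from difflib import SequenceMatcher
from itertools import combinations

records = [
    {"id": 1, "name": "Jonathan Smith", "phone": "555-0101"},
    {"id": 2, "name": "Jon Smith", "phone": "555-0101"},
    {"id": 3, "name": "Maria Garcia", "phone": "555-0199"},
]

def same_entity(a, b, threshold=0.8):
    name_sim = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    return name_sim >= threshold or a["phone"] == b["phone"]

for a, b in combinations(records, 2):
    if same_entity(a, b):
        print(f"records {a['id']} and {b['id']} likely refer to the same person")
```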
Jim Dowling (Logical Clocks), Ajit Mathews (AMD)
The Radeon open ecosystem (ROCm) is an open source software foundation for GPU computing on Linux. ROCm supports TensorFlow and PyTorch using MIOpen, a library of highly optimized GPU routines for deep learning. Jim Dowling and Ajit Mathews outline how the open source Hopsworks framework enables the construction of horizontally scalable end-to-end machine learning pipelines on ROCm-enabled GPUs.
Edward Oakes (UC Berkeley Electrical Engineering & Computer Sciences), Peter Schafhalter (UC Berkeley RISELab), Kristian Hartikainen (University of Oxford)
Edward Oakes, Peter Schafhalter, and Kristian Hartikainen take a deep dive into Ray, a new distributed execution framework for AI applications developed by machine learning and systems researchers at RISELab. They explore Ray's API and system architecture and share application examples, including several state-of-the-art distributed training, hyperparameter search, and RL algorithms.
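A small example of Ray's task API, with made-up hyperparameter values, shows how work fans out across a cluster and results are gathered:

```python
# Run independent evaluations in parallel with Ray's remote tasks.
import ray

ray.init()

@ray.remote
def evaluate(learning_rate):
    # Stand-in for training and scoring a model with this hyperparameter.
    return {"lr": learning_rate, "score": 1.0 - abs(0.01 - learning_rate)}

# Launch evaluations concurrently and gather the results.
futures = [evaluate.remote(lr) for lr in (0.1, 0.01, 0.001)]
results = ray.get(futures)
print(max(results, key=lambda r: r["score"]))
```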
Ian Massingham (Amazon Web Services)
Reinforcement learning is an advanced machine learning technique that makes short-term decisions while optimizing for a longer-term goal through trial and error. Ian Massingham dives into state-of-the-art techniques in deep reinforcement learning for a variety of use cases.
Robert Crowe (Google), Pedram Pejman (Google)
Putting together an ML production pipeline for training, deploying, and maintaining ML and deep learning applications is much more than just training a model. Robert Crowe and Pedram Pejman explore Google's TFX, an open source version of the tools and libraries that Google uses internally, made using its years of experience in developing production ML pipelines.
The AI revolution is poised to scale both machine and human knowledge. To generate that knowledge, companies must think differently about AI and how to deploy it. Alexis covers the three “Be’s” and how to approach AI systematically to truly harness knowledge at scale.
Ihab Ilyas (University of Waterloo)
Ihab Ilyas highlights the data-quality problem and describes the HoloClean framework, a state-of-the-art prediction engine for structured data with direct applications in detecting and repairing data errors, as well as imputing missing labels and values.
Arash Ghazanfari (Dell Technologies)
As we look toward more demanding applications of artificial intelligence to unlock value from data, it's increasingly essential to develop a sustainable big data strategy and to efficiently scale artificial intelligence initiatives. Arash Ghazanfari covers the fundamental principles that need to be considered in order to achieve this goal.
Raffaello D’Andrea (Verity | ETH Zurich)
It's hard to ignore the attention given to autonomy and robotics. The impact is significant and the reach is extensive, hitting transportation with self-driving cars, logistics and supply with mobile robots, and remote sensing applications with aerial vehicles or drones. Raffaello D'Andrea explores how autonomous indoor drones will drive the next wave of autonomous robotics development and growth.
Marta Kwiatkowska (University of Oxford)
Machine learning solutions are revolutionizing AI, but Marta Kwiatkowska explores their instability against adversarial examples—small perturbations to inputs that can catastrophically affect the output—which raises concerns about the readiness of this technology for widespread deployment.
  • Intel AI
  • O'Reilly
  • Amazon Web Services
  • IBM Watson
  • Dell Technologies
  • Hewlett Packard Enterprise
  • AXA

Contact us

confreg@oreilly.com

For conference registration information and customer service

partners@oreilly.com

For more information on community discounts and trade opportunities with O’Reilly conferences

aisponsorships@oreilly.com

For information on exhibiting or sponsoring a conference

pr@oreilly.com

For media/analyst press inquiries