Sep 9–12, 2019

Speaker slides & video

Presentation slides will be made available after the session has concluded and the speaker has given us the files. Check back if you don't see the file you're looking for—it might be available later! (However, please note some speakers choose not to share their presentations.)

Ananth Sankaranarayanan discusses three key shifts in the AI landscape—incredibly large models with billions of parameters, massive clusters of compute nodes supporting AI, and the exploding volume of data meeting ever-stricter latency requirements—how to navigate them, and when to explore hardware acceleration.
Daniel Russakoff (Voxeleron)
The emphasis in AI is on replicating human performance. Examples abound: ImageNet, self-driving cars, etc. It’s the same in medicine. Daniel Russakoff explains how Voxeleron LLC is working on what’s next—AI algorithms that do things that humans can’t, such as the prediction of age-related macular degeneration (AMD) progression, critical to successful treatment of this leading cause of vision loss.
Kristian Hammond (Northwestern Computer Science)
Even as AI technologies move into common use, many enterprise decision makers remain baffled about what the different technologies actually do and how they can be integrated into their businesses. Rather than focusing on the technologies alone, Kristian Hammond provides a practical framework for understanding the role these technologies can play in problem solving and decision making.
While network protocols are the language of the conversations among devices in a network, these conversations are hardly ever labeled. Advances in semantic representation present an opportunity to capture access semantics and model user behavior. Ram Janakiraman explains how, with strong embeddings as a foundation, behavioral use cases can be mapped to NLP models of choice.
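As a rough illustration of the idea (not Ram's implementation), you can treat each user's sequence of accessed hosts or services as a "sentence" and learn embeddings with an off-the-shelf word2vec model; the library choice (gensim >= 4.0) and the toy session data below are assumptions.

```python
from gensim.models import Word2Vec

# Each inner list is one user's session: the hosts/services touched, in order.
sessions = [
    ["dns-server", "auth-server", "file-share", "printer"],
    ["auth-server", "code-repo", "build-server", "artifact-store"],
    ["dns-server", "auth-server", "mail-server"],
]

# Skip-gram embeddings over access sequences, analogous to words in sentences.
model = Word2Vec(sentences=sessions, vector_size=32, window=3, min_count=1, sg=1)

# Services accessed in similar contexts get similar vectors, which downstream
# behavioral models (clustering, sequence models) can consume as features.
print(model.wv.most_similar("auth-server", topn=2))
```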
Kai Liu (Microsoft Bing), Yuqi Wang (Microsoft), Bin Wang (Microsoft)
Microsoft Bing runs large, complex workflows and services, but no existing solution met its needs, so it created and open-sourced FrameworkLauncher. Kai Liu, Yuqi Wang, and Bin Wang explore the solution, built to orchestrate workloads on YARN through the same interface without changes to the workloads, including large-scale long-running services, batch jobs, and streaming jobs.
Michael Radwin (Intuit)
Design thinking is a methodology for creative problem-solving developed at the Stanford d.school. The methodology is used by world-class design firms like IDEO and many of the world's leading brands like Apple, Google, Samsung, and GE. Michael Radwin shares a recipe for applying design thinking to the development of AI/ML products.
Angela Wu (Determined AI), Sidney Wijngaarde (Determined AI), Shiyuan Zhu (Determined AI), Vishnu Mohan (Determined AI)
Success with DL requires more than just TensorFlow or PyTorch. Angela Wu, Sidney Wijngaarde, Shiyuan Zhu, and Vishnu Mohan detail practical problems faced by practitioners and the software tools and techniques you'll need to address the problems, including data prep, GPU scheduling, hyperparameter tuning, distributed training, metrics management, deployment, mobile and edge optimization, and more.
Alex (Tianchu) Liang (American Tire Distributors)
Deep learning has been a sweeping revolution in the world of AI and machine learning, but traditional industries can sometimes be left behind. Alex Liang details two deep learning solutions: a warehouse staffing solution that uses LSTM RNNs for staffing-level forecasting and a pricing recommendation solution that uses DNNs for data clustering and demand modeling.
Lindsay Hiebert (Intel), Vikrant Viniak (Accenture)
Join Lindsay Hiebert and Vikrant Viniak as they explore the challenges developers face when designing a product that solves a real-world problem using the power of AI and IoT. To unlock the potential of AI at the edge, Intel launched its Intel AI: In Production ecosystem to accelerate the path from prototype to production at the edge with Intel and partner offerings.
Chris Butler (IPsoft)
Purpose, a well-defined problem, and trust are important factors to any system, especially those that employ AI. Chris Butler leads you through exercises that borrow from the principles of design thinking to help you create more impactful solutions and better team alignment.
Sarah Bird (Microsoft)
Researchers and practitioners from different disciplines have highlighted the ethical and legal challenges posed by the use of machine learning in many current and future real-world applications. Sarah Bird outlines her perspective on some of the major challenges in responsible AI development and examines promising new tools and technologies to help enable it in practice.
Andrew Feldman (Cerebras Systems)
The first announced element of the Cerebras solution is the Wafer Scale Engine (WSE). The WSE is the largest chip ever built: it contains 1.2 trillion transistors and covers 46,225 square millimeters of silicon. Andrew Feldman shares details of the WSE and discusses its impact on the industry.
Jike Chong (LinkedIn | Tsinghua University), Yue Cathy Chang (TutumGene)
Domain insights are crucial for successful AI/ML initiatives. Jike Chong and Yue Cathy Chang discuss three areas of concern: clarifying the business context, understanding the nuances of data sources, and navigating organizational structure.
Paco Nathan (derwen.ai)
Paco Nathan outlines the history and landscape for vendors, open source projects, and research efforts related to AutoML. Starting from the perspective of an AI expert practitioner who speaks business fluently, Paco unpacks the ground truth of AutoML—translating from the hype into business concerns and practices in a vendor-neutral way.
Ramesh Radhakrishnan (Dell Technologies), John Zedlewski (NVIDIA)
Data scientists and machine learning engineers need the flexibility to work in multiple environments without wasting precious time configuring hardware and software and modifying code. Ramesh Radhakrishnan and John Zedlewski walk you through deploying a simple set of technologies for executing end-to-end pipelines entirely on GPUs.
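One way to keep such a pipeline entirely on GPUs is the RAPIDS stack (cuDF for dataframes, cuML for modeling); the sketch below is a minimal, assumed example, and the session's exact tooling, file, and column names are not specified here.

```python
import cudf
from cuml.ensemble import RandomForestClassifier

# Load and prepare data on the GPU; "transactions.csv" and its columns are placeholders.
df = cudf.read_csv("transactions.csv").dropna()
X = df[["amount", "age", "tenure"]].astype("float32")
y = df["churned"].astype("int32")

# Train and predict with a GPU-accelerated model, without copying data back to the host.
clf = RandomForestClassifier(n_estimators=100, max_depth=8)
clf.fit(X, y)
preds = clf.predict(X)
```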
Eric Gardner (Intel)
Businesses recognize the transformational potential of advanced analytics, machine learning, and deep learning but often get lost on their path to AI. Eric Gardner spends his days advising customers about AI and shares a four-step journey that organizations of every kind can use to evaluate their unique path from data to insight.
Skyler Thomas (MapR)
The popular open source Kubeflow project is one of the best ways to start doing machine learning and AI on top of Kubernetes. However, Kubeflow is a huge project with dozens of large complex components. Skyler Thomas dives into the Kubeflow components and how they interact with Kubernetes. He explores the machine learning lifecycle from model training to model serving.
Kushal Datta (Intel)
Kushal Datta specializes in optimizing AI applications on CPUs; hear two of his latest customer success stories and get the details behind the technical collaboration that led to incredible performance for AI on CPU.
Srinivas Narayanan (Facebook AI)
Srinivas Narayanan takes you beyond fully supervised learning techniques to explore the next wave of change in AI.
Holden Karau (Independent)
Modeling is easy—productizing models, less so. Distributed training? Forget about it. Say hello to Kubeflow with Holden Karau—a system that makes it easy for data scientists to containerize their models to train and serve on Kubernetes.
Wei Cai (Cox Communications)
Real-time traffic volume prediction is vital in proactive network management, and many forecasting models have been proposed to address this. However, most are unable to fully use the information in traffic data to generate efficient and accurate traffic predictions for a longer term. Wei Cai explores predicting multistep, real-time traffic volume using many-to-one LSTM and many-to-many LSTM.
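For readers unfamiliar with the two architectures, here is a minimal Keras sketch contrasting them; the layer sizes, 12-step input window, and 6-step forecast horizon are illustrative assumptions, not Cox's actual configuration.

```python
import tensorflow as tf

TIMESTEPS, FEATURES, HORIZON = 12, 1, 6

# Many-to-one: read 12 past observations, predict only the next traffic volume.
many_to_one = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(TIMESTEPS, FEATURES)),
    tf.keras.layers.Dense(1),
])

# Many-to-many: read 12 past observations, emit a 6-step-ahead forecast at once.
many_to_many = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(TIMESTEPS, FEATURES)),
    tf.keras.layers.RepeatVector(HORIZON),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1)),
])

many_to_one.compile(optimizer="adam", loss="mse")
many_to_many.compile(optimizer="adam", loss="mse")
```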
Alessandro Palladini (Music Tribe)
Alessandro Palladini explores the role of experts and creatives in a world dominated by intelligent machines. Bridging the gap between research on complex systems and tools for creativity, he examines what he believes to be the key design principles and perspectives for making intelligent tools for creativity and for experts in the loop.
Sijun He (Twitter), Ali Mollahosseini (Twitter)
Twitter is what’s happening in the world right now. To connect users with the best content, Twitter needs to build a deep understanding of its noisy and temporal text content. Sijun He and Ali Mollahosseini explore the named entity recognition (NER) system at Twitter and the challenges Twitter faces to build and scale a large-scale deep learning system to annotate 500 million tweets per day.
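As context for the kind of model an NER system builds on, here is an illustrative PyTorch sketch of a token-level tagger; it is not Twitter's production model, and the vocabulary size, tag set, and dimensions are placeholders.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=50_000, embed_dim=128, hidden_dim=256, num_tags=9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_tags)  # e.g., BIO entity tags

    def forward(self, token_ids):               # token_ids: (batch, seq_len)
        x = self.embed(token_ids)
        x, _ = self.lstm(x)
        return self.classifier(x)               # per-token tag scores

# One batch of two 6-token tweets, already converted to vocabulary indices.
tagger = BiLSTMTagger()
tokens = torch.randint(0, 50_000, (2, 6))
tag_scores = tagger(tokens)                     # shape: (2, 6, 9)
```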
Michael Jordan (UC Berkeley)
Statistical decisions are often given meaning in the context of other decisions, particularly when there are scarce resources to be shared. Michael Jordan details the aim to blend gradient-based methodology with game-theoretic goals as part of a large "microeconomics meets machine learning" program.
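As a toy illustration of why game-theoretic objectives need more than vanilla gradients (my own example, not Jordan's formulation): on the zero-sum game f(x, y) = x * y, naive simultaneous gradient descent-ascent spirals away from the equilibrium at (0, 0), while an extragradient variant converges to it.

```python
lr, steps = 0.1, 1000

# Naive simultaneous descent-ascent: player 1 minimizes f, player 2 maximizes f.
x, y = 1.0, 1.0
for _ in range(steps):
    x, y = x - lr * y, y + lr * x
print("naive GDA:    ", (x, y))      # magnitude keeps growing; no convergence

# Extragradient: take a lookahead step, then update using the lookahead gradients.
x, y = 1.0, 1.0
for _ in range(steps):
    x_mid, y_mid = x - lr * y, y + lr * x        # lookahead
    x, y = x - lr * y_mid, y + lr * x_mid        # actual update
print("extragradient:", (x, y))      # converges toward the equilibrium (0, 0)
```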
Kenneth Stanley (Uber AI Labs | University of Central Florida)
We think a lot in machine learning about encouraging computers to solve problems, but there's another kind of learning, called open-endedness, that's just beginning to attract attention in the field. Kenneth Stanley walks you through how open-ended algorithms keep on inventing new and ever-more complex tasks and solving them continually—even endlessly.
Hagay Lupesko (Facebook)
Hagay Lupesko explores AI-powered personalization at Facebook, the challenges it encountered, and the practical techniques it applied to overcome them. You'll learn about deep learning-based personalization modeling, scalable training, and the accompanying system design approaches that are applied in practice.
Sahika Genc (Amazon)
Sahika Genc dives deep into the current state-of-the-art techniques in deep reinforcement learning (DRL) for a variety of use cases. Reinforcement learning (RL) is an advanced machine learning (ML) technique that makes short-term decisions while optimizing for a longer-term goal through trial and error.
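To make the trial-and-error loop concrete, here is a minimal tabular Q-learning sketch (an illustrative example, not material from the session): an agent on a five-state corridor learns to reach the goal by balancing short-term action choices against a discounted long-term return.

```python
import random

n_states, n_actions = 5, 2                 # actions: 0 = left, 1 = right; goal is state 4
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(2000):
    s = 0
    while s != n_states - 1:                                   # episode ends at the goal
        explore = random.random() < epsilon or Q[s][0] == Q[s][1]
        a = random.randrange(n_actions) if explore \
            else max(range(n_actions), key=lambda i: Q[s][i])
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s_next == n_states - 1 else 0.0             # reward only at the goal
        # Trial-and-error update toward the discounted long-term return.
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print([max(row) for row in Q])   # values rise toward the goal; the terminal state stays 0
```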
Joel Grus (Allen Institute for Artificial Intelligence)
AllenNLP is a PyTorch-based library designed to make it easy to do high-quality research in natural language processing (NLP). Joel Grus explains what modern neural NLP looks like; you'll get your hands dirty training some models, writing some code, and learning how you can apply these techniques to your own datasets and problems.
Amit Kapoor (narrativeVIZ), Bargava Subramanian (Binaize)
Recommendation systems play a significant role: for users, they open up a new world of options; for companies, they drive engagement and satisfaction. Amit Kapoor and Bargava Subramanian walk you through the different paradigms of recommendation systems and introduce you to deep learning-based approaches. You'll gain the practical hands-on knowledge to build, select, deploy, and maintain a recommendation system.
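As a taste of the deep learning-based approaches the tutorial covers, here is a minimal PyTorch sketch of an embedding-based recommender (a generic example, not the tutorial's code); the sizes and toy interaction data are assumptions.

```python
import torch
import torch.nn as nn

class EmbeddingRecommender(nn.Module):
    def __init__(self, n_users=1000, n_items=5000, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)

    def forward(self, users, items):
        # Predicted affinity for each (user, item) pair.
        return (self.user_emb(users) * self.item_emb(items)).sum(dim=1)

model = EmbeddingRecommender()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# A toy batch of observed interactions: 1 = engaged, 0 = did not engage.
users = torch.tensor([0, 1, 2, 0])
items = torch.tensor([10, 42, 7, 99])
labels = torch.tensor([1.0, 1.0, 0.0, 0.0])

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(users, items), labels)
    loss.backward()
    opt.step()
```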
Ashish Bansal (Twitter)
Twitter has amazing and unique content generated at an enormous velocity internationally in multiple languages. Ashish Bansal provides you with insight into the unique recommendation system challenges at Twitter’s scale and what makes this a fun and challenging task.
Shashank Prasanna (Amazon Web Services)
Machine learning involves a lot of experimentation. Data scientists spend days, weeks, or months performing algorithm searches, model architecture searches, hyperparameter searches, etc. Shashank Prasanna breaks down how you can easily run large-scale machine learning experiments using containers, Kubernetes, Amazon ECS, and SageMaker.
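For a sense of what launching such a search looks like, here is a minimal sketch using the SageMaker Python SDK (assuming v2 argument names; the container image, IAM role, metric regex, and S3 paths are placeholders, and the session's exact setup may differ).

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.tuner import ContinuousParameter, HyperparameterTuner, IntegerParameter

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerRole"   # placeholder IAM role

# Any training container image works here; this URI is a placeholder.
estimator = Estimator(
    image_uri="111122223333.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest",
    role=role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    sagemaker_session=session,
)

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:accuracy",
    metric_definitions=[{"Name": "validation:accuracy", "Regex": "val_acc=([0-9\\.]+)"}],
    hyperparameter_ranges={
        "learning_rate": ContinuousParameter(1e-4, 1e-1),
        "batch_size": IntegerParameter(32, 512),
    },
    max_jobs=50,           # total experiments to run
    max_parallel_jobs=5,   # containers running at a time
)

tuner.fit({"train": "s3://my-bucket/train/"})            # placeholder S3 input
```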
Lei Pan (Nauto)
Lei Pan examines how Nauto uses Amazon SageMaker and other AWS services, including Amazon Simple Notification Service (SNS) and Amazon Simple Queue Service (SQS) to continually evolve smarter data for driver behavior.
Urs Köster (Cerebras Systems)
Long training times are the single biggest factor slowing down innovation in deep learning. Today's common approach of scaling large workloads out over many small processors is inefficient and requires extensive model tuning. Urs Köster explains why with increasing model and dataset sizes, new ideas are needed to reduce training times.
Ting-Fang Yen (DataVisor)
Ting-Fang Yen details a monitor for production machine learning systems that handle billions of requests daily. The approach discovers detection anomalies, such as spurious false positives, as well as gradual concept drifts when the model no longer captures the target concept. You'll see new tools for detecting undesirable model behaviors early in large-scale online ML systems.
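One simple signal such a monitor can track, sketched below with invented data (an illustrative example, not DataVisor's approach): compare today's model-score distribution against a trusted baseline with a two-sample Kolmogorov-Smirnov test to flag sudden spikes or gradual drift.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline_scores = rng.beta(2.0, 8.0, size=100_000)   # scores from a known-good week
todays_scores = rng.beta(2.6, 8.0, size=100_000)     # today's production scores

# A significant distributional shift can indicate spurious detections or concept drift.
statistic, p_value = stats.ks_2samp(baseline_scores, todays_scores)
if p_value < 0.01 and statistic > 0.05:              # alert thresholds are assumptions
    print(f"Possible drift or anomaly: KS statistic {statistic:.3f}")
```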
Robert Crowe (Google)
Putting together an ML production pipeline for training, deploying, and maintaining ML and deep learning applications is much more than just training a model. Robert Crowe explores TensorFlow Extended (TFX), Google's open source version of the tools and libraries it uses internally, built on its years of experience developing production ML pipelines.
Triveni Gandhi (Dataiku)
With the adoption of AI in the enterprise accelerating, its impacts—both positive and negative—are rapidly increasing. Triveni Gandhi explores why the builders of these new AI capabilities all bear some moral responsibility for ensuring that their products create maximum benefit and minimal harm.
Jonathan Peck (GitHub)
ML has been advancing rapidly, but only a few contributors focus on the infrastructure and scaling challenges that come with it. Jonathan Peck explores why ML is a natural fit for serverless computing, a general architecture for scalable ML, and common issues when implementing on-demand scaling over GPU clusters, providing general solutions and a vision for the future of cloud-based ML.
Huaixiu Zheng (Uber)
Uber applies natural language processing (NLP) and conversational AI in a number of business domains. Huaixiu Zheng details how Uber applies deep learning in the domain of NLP and conversational AI. You'll learn how Uber implements AI solutions in a real-world environment, as well as cutting-edge research in end-to-end dialogue systems.
Dinesh Nirmal examines how, with a unified, prescriptive information architecture, organizations can successfully unlock the value of their data for AI as well as trust and control the business impact and risks of AI while coexisting in a multicloud world.
  • Intel AI
  • O'Reilly
  • Amazon Web Services
  • IBM Watson
  • Dataiku
  • Dell Technologies
  • Intuit
  • Gamalon
  • H2O.ai
  • Hewlett Packard Enterprise
  • MapR Technologies
  • Sisu Data

Contact us

confreg@oreilly.com

For conference registration information and customer service

partners@oreilly.com

For more information on community discounts and trade opportunities with O’Reilly conferences

Become a sponsor

For information on exhibiting or sponsoring a conference

pr@oreilly.com

For media/analyst press inquiries