14–17 Oct 2019

Presentations

Sridhar Alla (BlueWhale)
Any business, big or small, depends on analytics, whether the goal is revenue generation, churn reduction, or sales and marketing. No matter the algorithm and the techniques used, the result depends on the accuracy and consistency of the data being processed. Sridhar Alla examines some techniques used to evaluate the quality of data and the means to detect anomalies in the data.
Vignesh Gopakumar (United Kingdom Atomic Energy Authority)
Vignesh Gopakumar explores image mapping of the temporal evolution of physics parameters as plasma interacts with the reactor wall using a data-inferred approach. The model captures how parameters such as temperature and density evolve across space and time. By analyzing the patterns found in simulation data, the model learns the existing physics relations implicitly defined within the data.
Alejandro Saucedo (The Institute for Ethical AI & Machine Learning)
Alejandro Saucedo demystifies AI explainability through a hands-on case study, where the objective is to automate a loan-approval process by building and evaluating a deep learning model. He introduces motivations through the practical risks that arise with undesired bias and black box models and shows you how to tackle these challenges using tools from the latest research and domain knowledge.
Julien Simon (Amazon Web Services)
Many natural language processing (NLP) tasks require each word in the input text to be mapped to a vector of real numbers. Julien Simon explores word vectors, why they’re so important, and which are the most popular algorithms to compute them (Word2Vec, GloVe, BERT). You'll get to see how to solve typical NLP problems through several demos by either computing embeddings or reusing pretrained ones.
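The snippet below is a minimal, hedged sketch of the word-vector idea this session covers: training a tiny Word2Vec model and querying nearest neighbours. It assumes gensim 4.x is installed; the toy corpus and parameter values are illustrative, not the session's material.

```python
# A minimal, illustrative sketch of computing word vectors with Word2Vec.
# Assumes gensim 4.x; the tiny corpus here is purely a toy example.
from gensim.models import Word2Vec

corpus = [
    ["machine", "learning", "maps", "words", "to", "vectors"],
    ["word", "vectors", "capture", "semantic", "similarity"],
    ["similar", "words", "get", "similar", "vectors"],
]

# Train a small skip-gram model (sg=1); real NLP tasks need far more text.
model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, sg=1)

vec = model.wv["vectors"]                      # the learned embedding (numpy array)
print(vec.shape)                               # (50,)
print(model.wv.most_similar("words", topn=3))  # nearest neighbours in vector space
```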
Michael Friedrich and Stefanie Grunwald explore how an algorithm capable of playing Space Invaders can also improve your cloud service's automated scaling mechanism.
Walter Riviera (Intel)
Walter Riviera details three key shifts in the AI landscape—incredibly large models with billions of parameters, massive clusters of compute nodes supporting AI, and the exploding volume of data meeting ever-stricter latency requirements—how to navigate them, and when to explore hardware acceleration.
Rajib Biswas (Ericsson)
Rajib Biswas outlines the application of AI algorithms like generative adversarial networks (GANs) to solve natural language synthesis tasks. Join in to learn how AI can accomplish complex tasks like machine translation, write poetry with style, read a novel, and answer your questions.
Don't miss AI at Night, happening on Wednesday after the Attendee Reception.
Walter Riviera (Intel)
What are the essential steps to take in order to develop an AI solution? How long would this process take? As machine learning is teaching us, the answers can be learned from previous experience. Walter Riviera walks you through a collection of real-life stories, looking for successful and misleading behavioral patterns.
Angie Ma (Faculty), Richard Sargeant (Faculty), Joshua Muncke (Faculty Science Ltd)
Angie Ma and Richard Sargeant offer a condensed introduction to key AI and machine learning concepts and techniques, showing you what is (and isn't) possible with these exciting new tools and how they can benefit your organization.
Angie Ma and Richard Sargeant offer a condensed introduction to key AI and machine learning concepts and techniques, showing you what is (and isn't) possible with these exciting new tools and how they can benefit your organization.
Konrad Wawruch (7bulls.com)
Konrad Wawruch explores real business usage of advanced methods for financial time series forecasting (based on winning methods from the M4 competition) and asset portfolio optimization (based on Monte Carlo tree search with neural networks, the AlphaZero approach). He demonstrates a complete investments platform with an AI workflow and real-time broker integration, including a real usage demo.
Thomas Henson (Dell Technologies)
As machine learning and deep learning techniques reach mainstream adoption, the architectural considerations for platforms that support large-scale production deployments of AI applications change significantly as you mature beyond small-scale sandbox and POC environments. Thomas Henson walks you through eliminating I/O bottlenecks to keep your GPU-powered AI rocket ship fueled with data.
What if we were able to translate our cultural mannerisms so they were understandable to others? Especially those from other countries, cities, backgrounds and cultures? “Pasta is my true love, just as waiting in queues is yours!” Can AI make a real difference, or is it too affected by biases? And what if it were possible to remove those biases completely?
Tom Sabo (SAS)
Efforts to counter human trafficking internationally must assess data from a variety of sources to determine where best to devote limited resources. Tom Sabo explores text-based machine learning, rule-based text extraction to generate training data for modeling efforts, and interactive visualization to improve international trafficking response.
Tuhin Sharma (Binaize Labs), Bargava Subramanian (Binaize Labs)
There's an exponential growth in the number of internet-enabled devices on modern smart buildings. IoT sensors measure temperature, lighting, IP camera, and more. Tuhin Sharma and Bargava Subramanian explain how they built anomaly-detection models using federated learning—which is privacy preserving and doesn't require data to be moved to the cloud—for data quality and cybersecurity.
Advances in artificial intelligence have meant that it's now more accessible than ever before—and this accessibility means that it can be both the hunter and the hunted. In the race to ensure cybersecurity, AI is an essential tool to protect your most sensitive assets. Join Matt Armstrong-Barnes to find out how this new dimension is changing the threat landscape and how to make AI your friend.
Come enjoy delicious snacks and beverages with fellow AI Conference attendees, speakers, and sponsors at the Attendee Reception, happening immediately after the afternoon sessions on Wednesday.
AI-powered market research is performed by indirect approaches based on sparse and implicit consumer feedback (e.g., social network interactions, web browsing, or online purchases). These approaches are more scalable, authentic, and suitable for real-time consumer insights. Gianmario Spacagna proposes a novel algorithm of audience projection able to provide consumer insights over multiple domains.
Adithya Hrushikesh (Vodafone)
Every day, millions of Vodafone Germany customers reach out through various social media channels about issues related to mobile, internet, signal issues, etc. Adithya Hrushikesh details how to build and deploy an ensemble model to classify 26 (originally 56) complaint classes using machine learning over deep learning. He also touches on the business case, data product development, and GDPR.
Brett A Phaneuf (Submergence Group (US) and MSubs (UK))
Brett Phaneuf outlines how similar types of AI can fit into your company solutions and how technologies like containers, deep learning, cloud, machine learning, and more all fit together to drive innovation for the "new world" of the future.
Danielle Dean (iRobot), Wee Hyong Tok (Microsoft), Mathew Salvaris (Microsoft)
Dive into the newly released GitHub repository for recommended ways to train and deploy models on Azure with Danielle Dean, Wee Hyong Tok, and Mathew Salvaris. The repository covers everything from running massively parallel hyperparameter tuning using Hyperdrive to deploying deep learning models on Kubernetes.
Sergey Ermolin (Amazon Web Services)
Sunil Mallya walks you through building complex ML-enabled products using reinforcement learning (RL), explores hardware design challenges and trade-offs, and details real-life examples of how any developer can up-level their RL skills through autonomous driving.
Tyler Dunn (Rasa)
AI assistants are getting a great deal of attention from the industry and research. However, the majority of assistants built to this day are still developed using a state machine and a set of rules. That doesn’t scale in production. Tyler Dunn explores how to build AI assistants that go beyond FAQ interactions using machine learning and open source tools.
Chang Liu (Georgian Partners), Ji Chao Zhang (Georgian Partners)
The world is increasingly data driven, and people have developed an awareness and concern for their data. Chang Liu and Ji Chao Zhang examine differential privacy—the component of the TensorFlow Privacy library that allows users to train differentially private logistic regression and support vector machines—along with real-world use cases and demonstrations for how to apply the tools.
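As a rough illustration of the machinery behind libraries such as TensorFlow Privacy, here is a conceptual numpy sketch of a DP-SGD-style update: clip each per-example gradient, then add calibrated Gaussian noise before updating the weights. The function name, shapes, and hyperparameters are assumptions for illustration only, not the library's API.

```python
# Conceptual sketch of the DP-SGD idea behind differentially private training:
# clip each per-example gradient, then add calibrated Gaussian noise.
import numpy as np

def dp_sgd_step(per_example_grads, weights, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))  # per-example clipping
    mean_grad = np.mean(clipped, axis=0)
    noise = np.random.normal(0.0, noise_mult * clip_norm / len(clipped),
                             size=mean_grad.shape)
    return weights - lr * (mean_grad + noise)  # noisy, privacy-preserving update

grads = [np.random.randn(5) for _ in range(8)]  # fake per-example gradients
weights = dp_sgd_step(grads, np.zeros(5))
```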
Paris Buttfield-Addison (Secret Lab), Tim Nugent (lonely.coffee)
You're building a high-volume, expensive, robot-driven warehouse. Your robots need to get to the right place quickly, find the right item, and sort it to the right place without colliding with each other, the shelves, or people. But you don't have any robots, and you need to start writing the logic and training them. Paris Buttfield-Addison and Tim Nugent outline how to use a simulation to do it.
O'Reilly AI program chairs close the first day of keynotes.
O'Reilly AI program chairs close the second day of keynotes.
Bruno Wassermann (IBM Research)
Imagine there's a new version of your complex machine learning pipeline, but you need to make sure it doesn't negatively impact the performance of large numbers of existing customer models in production. Bruno Wassermann explains how IBM Research tackled the challenge for the natural language understanding layer of the IBM Watson Assistant service and demonstrates a new tool called Clue.
Ilya Feige (Faculty)
Ilya Feige explores AI safety concerns—explainability, fairness, and robustness—relevant for machine learning (ML) models in use today. With concepts and examples, he demonstrates tools developed at Faculty to ensure black box algorithms make interpretable decisions, do not discriminate unfairly, and are robust to perturbed data.
Antje Barth (AWS)
Container and cloud native technologies around Kubernetes have become the de facto standard in modern ML and AI application development. Antje Barth examines common architecture blueprints and popular technologies used to integrate AI into existing infrastructures and explains how you can build a production-ready containerized platform for deep learning.
Umberto Michelucci (TOELT LLC)
Convolutional neural networks (CNNs) are the basis of many algorithms that deal with images, from image recognition and classification to object detection. Using practical examples, Umberto Michelucci walks you through developing convolutional neural networks, using pretrained networks, and even teaching a network to paint. TensorFlow or Keras will be used for all examples.
Convolutional neural networks (CNNs) are the basis of many algorithms that deal with images, from image recognition and classification to object detection. Using practical examples, Umberto Michelucci walks you through developing convolutional neural networks, using pretrained networks, and even teaching a network to paint. TensorFlow or Keras will be used for all examples.
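For readers who want a concrete starting point, here is a minimal Keras convolutional network of the kind this training builds; the layer sizes, input shape, and 10-class output are illustrative assumptions rather than the course's exact model.

```python
# A minimal Keras CNN for image classification; shapes are illustrative.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # hypothetical dataset variables
```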
Jameson Toole (Fritz)
Getting machine learning models ready for use on device is a major challenge. Drag-and-drop training tools can get you started, but the models they produce aren’t small enough or fast enough to ship. Jameson Toole walks you through optimization, pruning, and compression techniques to keep app sizes small and inference speeds high.
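One common compression technique in this space is post-training quantization. The sketch below shows the general shape of that step with the TensorFlow Lite converter; the stand-in model and output file name are illustrative assumptions, not the session's exact workflow.

```python
# Illustrative sketch of post-training quantization for on-device inference.
import tensorflow as tf

# A trained model would normally go here; this tiny stand-in keeps the sketch runnable.
model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(4,), activation="softmax")])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # weight quantization
tflite_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)                              # smaller artifact to ship in an app
```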
Siddha Ganju (NVIDIA), Meher Kasam (Square)
Over the last few years, convolutional neural networks (CNNs) have risen in popularity, especially in the area of computer vision. Many mobile applications running on smartphones and wearable devices would benefit from the new opportunities enabled by deep learning techniques. Siddha Ganju and Meher Kasam walk you through optimizing deep neural nets to run efficiently on mobile devices.
Thomas Phelan (HPE BlueData)
Today, organizations understand the need to keep pace with new technologies when it comes to performing data science with machine learning and deep learning, but these new technologies come with their own challenges. Thomas Phelan demonstrates the deployment of TensorFlow, Horovod, and Spark using the NVIDIA CUDA stack on Docker containers in a secure multitenant environment.
Rich Ott (The Data Incubator)
PyTorch is a machine learning library for Python that allows you to build deep neural networks with great flexibility. Its easy-to-use API and seamless use of GPUs make it a sought-after tool for deep learning. Join Rich Ott to get the knowledge you need to build deep learning models using real-world datasets and PyTorch.
PyTorch is a machine learning library for Python that allows you to build deep neural networks with great flexibility. Its easy-to-use API and seamless use of GPUs make it a sought-after tool for deep learning. Join Rich Ott to get the knowledge you need to build deep learning models using real-world datasets and PyTorch.
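As a taste of the library, here is a small, self-contained PyTorch training step; the architecture, fake batch, and hyperparameters are illustrative assumptions, not the course's dataset or model.

```python
# A small PyTorch network and a single training step on a fake batch.
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(784, 128), nn.ReLU(),
            nn.Linear(128, 10),
        )

    def forward(self, x):
        return self.layers(x)

net = Net()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 784)            # fake batch standing in for real data
y = torch.randint(0, 10, (32,))
optimizer.zero_grad()
loss = loss_fn(net(x), y)
loss.backward()
optimizer.step()
```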
Michael Cullan (The Data Incubator)
The TensorFlow library provides computational graphs with automatic parallelization across resources—ideal architecture for implementing neural networks. Michael Cullan walks you through TensorFlow's capabilities in Python, from building machine learning algorithms piece by piece to using the Keras API provided by TensorFlow with several hands-on applications.
The TensorFlow library provides computational graphs with automatic parallelization across resources—ideal architecture for implementing neural networks. Michael Cullan walks you through TensorFlow's capabilities in Python, from building machine learning algorithms piece by piece to using the Keras API provided by TensorFlow with several hands-on applications.
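To illustrate the "piece by piece" style the course contrasts with the Keras API, here is a hedged sketch of linear regression trained with tf.GradientTape on synthetic data; all values are illustrative.

```python
# "Piece by piece" TensorFlow: a linear regression trained with tf.GradientTape.
import tensorflow as tf

X = tf.random.normal((100, 1))
y = 3.0 * X + 2.0 + tf.random.normal((100, 1), stddev=0.1)  # synthetic data

w = tf.Variable(tf.random.normal((1, 1)))
b = tf.Variable(tf.zeros((1,)))
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

for _ in range(200):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(tf.matmul(X, w) + b - y))
    grads = tape.gradient(loss, [w, b])
    optimizer.apply_gradients(zip(grads, [w, b]))

print(w.numpy(), b.numpy())  # should approach 3.0 and 2.0
```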
Biraja Ghoshal (Tata Consultancy Service)
Deep learning, which involves powerful black box predictors, has achieved state-of-the-art performance in medical imaging analysis, such as segmentation and classification for diagnosis, but knowing how much confidence there is in a prediction is essential for gaining clinicians' trust. Biraja Ghoshal explores probabilistic modeling with TensorFlow Probability in cancer prediction.
Karim Beguir (InstaDeep)
Karim Beguir discusses a system in which an agent learns to pack boxes efficiently in containers while respecting multiple physical constraints. The agent is trained using reinforcement learning to minimize the wasted space. Without any human knowledge, the agent achieves superhuman performance and outperforms commercial optimization software.
Yan Zhang (Microsoft), Mathew Salvaris (Microsoft)
When IoT meets AI, a new round of innovations begins. Yan Zhang and Mathew Salvaris examine the methodology, practice, and tools around deploying machine learning models on the edge. They offer a step-by-step guide to creating an ML model using Python, packaging it in a Docker container, and deploying it as a local service on an edge device as well as deployment on GPU-enabled edge devices.
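As a rough sketch of the "local service" step described above, the following wraps a model behind a small Flask scoring endpoint that could then be packaged into a Docker container; DummyModel, the route, and the port are illustrative placeholders, not the speakers' exact setup.

```python
# Minimal scoring service: wrap a model in a Flask endpoint for edge deployment.
from flask import Flask, jsonify, request

class DummyModel:
    def predict(self, rows):
        return [sum(row) for row in rows]   # placeholder for real inference

app = Flask(__name__)
model = DummyModel()

@app.route("/score", methods=["POST"])
def score():
    features = request.get_json()["features"]
    return jsonify({"prediction": model.predict([features])})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)      # inside Docker: EXPOSE 5001
```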
Steve Flinter (Mastercard Labs), Ahmed Menshawy (Mastercard Labs)
Steve Flinter and Ahmed Menshawy explore the work that Mastercard Labs undertook to build an end-to-end machine learning pipeline, suitable for both R&D and production, using Kubernetes and Kubeflow. They demonstrate how the pipeline can be defined, configured, connected to a data streaming service, and used to train and deploy a model, which can be exposed for inference via an API.
Developing perception algorithms for autonomous vehicles is incredibly difficult, as they need to operate in thousands of driving conditions and locations. Adam Grzywaczewski explores the challenges involved in data collection, processing, and management, as well as model development and validation. He also provides an overview of the necessary hardware and software infrastructure.
Rebecca Gu (Electron), Cris Lowery (Baringa Partners)
In a future of widespread algorithmic pricing, cooperation between algorithms is easier than ever, resulting in coordinated price rises. Rebecca Gu and Cris Lowery explore how a Q-learner algorithm can inadvertently reach a collusive outcome in a virtual marketplace, which industries are likely to be subject to greater restrictions or scrutiny, and what future digital regulation might look like.
Katharine Jarmul (KIProtect)
Katharine Jarmul sates your curiosity about how far we've come in implementing privacy within machine learning systems. She dives into recent advances in privacy measurements and explains how they have changed the approach to privacy in machine learning. You'll discover new techniques including differentially private data collection, federated learning, and homomorphic encryption.
Bahman Bahmani (Rakuten)
Amid fears of sentient killing robots and a freezing AI winter, AI has a true potential to transform the enterprise. Actualizing this potential requires a well-informed organizational strategy and consistent execution of best practices regarding people, processes, and platforms. Bahman Bahmani examines these strategies and best practices and provides insights into their successful execution.
In the rapidly changing world of AI, adopting the right design principles is key. From data scientists and business users to client end users, IBM Watson always seeks to augment their capabilities. Ariadna Font Llitjós examines how IBM Watson applies ethical AI and user-centered design principles from the beginning and leverages them throughout the product development cycle.
Anastasia Kouvela (A.T. Kearney), Bharath Thota (A.T. Kearney)
The Analytics Impact Index gives organizations an understanding of the value potential of analytics as well as the capabilities required to capture the most value. Anastasia Kouvela and Bharath Thota walk you through the 2019 results and the analytics journey of leading global organizations and empower companies to develop a case for change.
Tim Daines (QuantumBlack), Philip Pilgerstorfer (QuantumBlack)
Data scientists feel naturally comfortable with the language of mathematics, while designers think in the language of human empathy. Creating a bridge between the two was essential to the success of a recent project at an energy company. Tim Daines and Philip Pilgerstorfer detail what they learned while creating these bridges, showcasing techniques through a series of “aha” moments.
Voice-based AI continues to gain popularity among customers, businesses, and brands, but it’s important to understand that, while it presents a slew of new data at our disposal, the technology is still in its infancy. Andreas Kaltenbrunner examines three ways voice assistants will make big data analytics more complex and the various steps you can take to manage this in your company.
Ted Malaska (Capital One)
While at a big tech conference on AI, it's important to reflect on the human components. Ted Malaska walks you through scenarios and strategies to help different groups work together and explains how to evaluate success and sniff out trouble areas. You'll look at every part of the pipeline to see who's involved and how to optimize the interaction points throughout the pipeline—and how to have fun.
Mark Madsen (Teradata)
The growing complexity of data science leads to black box solutions that few people in an organization understand. Mark Madsen explains why reproducibility—the ability to get the same results given the same information—is a key element to build trust and grow data science use. And one of the foundational elements of reproducibility (and successful ML projects) is data management.
Paco Nathan (derwen.ai)
Paco Nathan outlines the history and landscape for vendors, open source projects, and research efforts related to AutoML. Starting from the perspective of an AI expert practitioner who speaks business fluently, Paco unpacks the ground truth of AutoML—translating from the hype into business concerns and practices in a vendor-neutral way.
Umit Cakmak (IBM)
In every AI initiative, there’s a demand from businesses to protect or increase market share or decrease operational costs. Your competitors are a growing threat, seemingly adopting new technologies better than you. Umit Cakmak examines critical steps from countless client engagements on how to consistently deliver successful AI projects.
Charlotte Han (Independent)
According to research by AI2, China is poised to overtake the US in the most-cited 1% of AI research papers by 2025. The view that China is a copycat but not an innovator may no longer be true. Charlotte Han explores what the implications of China's government funding, culture, and access to massive data pools mean to AI development and how the world could benefit from such advancement.
Arun Verma (Bloomberg)
To gain an edge in the markets, quantitative hedge fund managers require automated processing to quickly extract actionable information from unstructured and increasingly nontraditional sources of data. Arun Verma shares NLP, AI, and ML techniques that help extract derived signals that have significant trading alpha or risk premium and lead to profitable trading strategies.
Martin Benson (Jaywing)
Machine learning has been used in credit scoring for three decades. Martin Benson discusses the history of machine learning in credit scoring and the need for explainable and justified decisions made by machine learning systems. Come find out if it's possible to overcome the black box problem and learn how machine learning systems are evolving and how to bypass the challenges to adoption.
Alex Ingerman (Google)
Federated learning is the approach of training ML models across many devices without collecting the data in a central location. Alex Ingerman explores learning concepts and the use cases for decentralized machine learning, drawing on Google's real-world deployments. You'll learn how to build your first federated models with the open source TensorFlow Federated.
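For intuition, here is a conceptual numpy sketch of federated averaging, the idea underlying TensorFlow Federated: each client trains locally on its own data and only model updates are aggregated on the server. The linear-model local update and synthetic client data are illustrative assumptions, not the library's API.

```python
# Conceptual federated averaging: clients train locally, server averages updates.
import numpy as np

def local_update(weights, client_data, lr=0.01):
    X, y = client_data
    grad = X.T @ (X @ weights - y) / len(y)   # gradient of a linear model
    return weights - lr * grad

def federated_round(global_weights, clients):
    updates = [local_update(global_weights.copy(), data) for data in clients]
    return np.mean(updates, axis=0)           # server averages client models

clients = [(np.random.randn(20, 3), np.random.randn(20)) for _ in range(5)]
weights = np.zeros(3)
for _ in range(10):
    weights = federated_round(weights, clients)  # raw data never leaves a client
```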
Carlos Rodrigues (Siemens)
An evolving landscape of cyber threats demands innovation. It's time to bring AI to the fight. Carlos Rodrigues explains why it's mandatory to use bleeding-edge AI in production to improve threat detection in a worldwide company such as Siemens. The corporate network has more than 500,000 endpoints and more than 370,000 employees. The attack vectors are endless; thus, legacy approaches don't scale.
Ritika Gunnar explores why you need to focus on your organization’s culture and build a data-first approach to shape a strong, AI-ready organization.
Ritika Gunnar explores why you need to focus on your organization’s culture and build a data-first approach to shape a strong, AI-ready organization.
Carlos Escapa (Amazon Web Services)
Carlos Escapa takes a deep dive into how to identify use cases for ML, acquire cutting-edge best practices to frame problems in a way that key stakeholders and senior management can understand and support, and set the stage for delivering successful ML-based solutions for your business.
Ira Cohen (Anodot), Arun Kejariwal (Independent)
While the role of the manager doesn't require deep knowledge of ML algorithms, it does require understanding how ML-based products should be developed. Ira Cohen explores the cycle of developing ML-based capabilities (or entire products) and the role of the (product) manager in each step of the cycle.
Thomas Phelan (HPE BlueData)
Join Thomas Phelan to learn whether the combination of containers with large-scale distributed data analytics and machine learning applications is like combining oil and water or like peanut butter and chocolate.
If you had five minutes on stage, what would you say? What if you only got 20 slides, and they rotated automatically after 15 seconds? Would you pitch a project? Launch a website? Teach a hack? We’ll find out at our Ignite event at AI London.
Holger Kyas (Open Group, Helvetia Insurances, University of Applied Sciences)
Holger Kyas details the AI multicloud broker, which is triggered by Amazon Alexa and mediates between AWS Comprehend (Amazon), Azure Text Analytics (Microsoft), GCP Natural Language (Google), and Watson Tone Analyzer (IBM) to compare and analyze sentiment. The extended AI part generates new sentences (e.g., marketing slogans) with a recurrent neural network (RNN).
Zhe Zhang (LinkedIn)
Machine learning (ML) engineering differs fundamentally from traditional software engineering in the level of uncertainty and unpredictability of an idea until fully verified in production. Join Zhe Zhang to explore the deciding factor in ML-based products (e.g., recommendation, ranking)—the speed of the trial-and-error loop.
Abhishek Kumar (Publicis Sapient)
Abhishek Kumar outlines how to industrialize capsule networks, detailing what they are, how they handle spatial relationships between objects in an image, and how to apply them to text analytics and tasks such as NLU or summarization. Join in to see a scalable, productionizable implementation of capsule networks over KubeFlow.
Qun Ying (Microsoft)
Anomaly detection may sound old fashioned, yet it's super important in many industry applications. Tony Xing, Bixiong Xu, Congrui Huang, and Qun Ying detail a novel anomaly-detection algorithm based on spectral residual (SR) and convolutional neural network (CNN) and explain how this method was applied in the monitoring system supporting Microsoft AIOps and business incident prevention.
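For readers curious about the spectral residual step, here is an illustrative numpy sketch of computing an SR saliency map for a univariate series; the window size, threshold-free argmax check, and injected anomaly are assumptions, not the paper's exact settings.

```python
# Illustrative spectral residual (SR) saliency map for time series anomalies.
import numpy as np

def spectral_residual_saliency(series, avg_window=3):
    freq = np.fft.fft(series)
    log_amp = np.log(np.abs(freq) + 1e-8)
    avg_log_amp = np.convolve(log_amp, np.ones(avg_window) / avg_window, mode="same")
    residual = log_amp - avg_log_amp                       # the spectral residual
    recon = np.fft.ifft(np.exp(residual + 1j * np.angle(freq)))
    return np.abs(recon)                                   # large values suggest anomalies

series = np.sin(np.linspace(0, 20, 200))
series[120] += 3.0                         # inject an anomaly
saliency = spectral_residual_saliency(series)
print(np.argmax(saliency))                 # expected to point near index 120
```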
Kim Hazelwood (Facebook), Mohamed Fawzy (Facebook)
AI plays a key role in achieving Facebook's mission of connecting people and building communities. Nearly every visible product is powered by machine learning algorithms at its core, from delivering relevant content to making the platform safe. Kim Hazelwood and Mohamed Fawzy explain how applied ML has continued to change the landscape of the platforms and infrastructure at Facebook.
Weifeng Zhong (Mercatus Center at George Mason University)
Weifeng Zhong explores a novel method to learn structural changes embedded in unstructured texts based on the Policy Change Index (PCI) framework developed by economists Julian Chan and Weifeng Zhong. He explains how an off-the-shelf application of deep learning—with an important twist—can help you detect structural break points in time series text data.
Learning languages is all the rage in this modern world with Swift, Rust, Python, and the like. But another modern language stands beside these marvels: Klingon. The language of the future! Qapla’! In this Ignite, we’ll learn the essential phrases for modern Klingon. Helpful for any artificial intelligence professional navigating the interstellar world.
Zhe Zhang (LinkedIn)
From people you may know (PYMK) to economic graph research, machine learning is the oxygen that powers how LinkedIn serves its 630M+ members. Zhe Zhang provides you with an architectural overview of LinkedIn’s typical machine learning pipelines complemented with key types of ML use cases.
Tobias Martens (whoelse.ai)
More than 50% of all interactions between humans and machines are expected to be speech-based by 2022. The challenge: Every AI interprets human language slightly differently. Tobias Martens details current issues in NLP interoperability and uses Chomsky's theory of universal hard-wired grammar to outline a framework to make the human voice in AI universal, accountable, and computable.
Lyndon Leggate walks you through a step-by-step demonstration of how you can up-level your reinforcement learning (RL) skills through autonomous driving.
Alasdair Allan (Babilim Light Industries)
The future of machine learning is on the edge and on small, embedded devices that can run for a year or more on a single coin-cell battery. Alasdair Allan dives deep into how using deep learning can be very energy efficient and allows you to make sense of sensor data in real time.
Your company has a large amount of data locked into thousands or millions of scanned paper documents. You'd like to extract and analyze it, but you first have to prove that your algorithm works and brings business value. Ciprian Tomoiaga explains how to start.
Manas Ranjan Kar (Episource)
Natural language processing (NLP) is hard, especially for clinical text. Manas Ranjan Kar explains the multiple challenges of NLP for clinical text and why it's so important that we invest a fair amount of time on domain-specific feature engineering. It’s also crucial to understand how to diagnose an NLP model's performance and identify possible gaps.
Ted Dunning (MapR)
Evaluating machine learning models is surprisingly hard, but it gets even harder because these systems interact in very subtle ways. Ted Dunning breaks the problem into operational and functional concerns and shows you how each can be done without unnecessary pain and suffering. You'll also get to see some exciting visualization techniques to help make the differences strikingly apparent.
Paris Buttfield-Addison (Secret Lab), Tim Nugent (lonely.coffee)
On-device ML and AI is the future for privacy-conscious, cloud-averse users of modern smartphones. Paris Buttfield-Addison and Tim Nugent explore what's possible using CoreML, Swift, and associated frameworks in tandem with the powerful ML-tuned silicon in modern Apple iOS hardware. They demonstrate and create ML and AI features with Swift to show how much you can do without touching the cloud.
Ganes Kesari (Gramener), Soumya Ranjan (Gramener)
In many countries, policy decisions are disconnected from data, and very few avenues exist to understand deeper demographic and socioeconomic insights. Ganes Kesari and Soumya Ranjan explain how satellite imagery can be a powerful aid when viewed through the lens of deep learning. When combined with conventional data, it can help answer important questions and show inconsistencies in survey data.
Michael Mahoney (UC Berkeley)
Developing theoretically principled tools to guide the use of production-scale neural networks is an important practical challenge. Michael Mahoney explores recent work from scientific computing and statistical mechanics to develop such tools, covering basic ideas and their use for analyzing production-scale neural networks in computer vision, natural language processing, and related tasks.
Emily Webber (Amazon Web Services)
If you've ever wondered if you could use AI to inform public policy, join Emily Webber as she combines classic economic methods with AI techniques to train a reinforcement learning agent on decades of randomized control trials. You'll learn about classic philosophical foundations for public policy decision making and how these can be applied to solve the problems that impact the many.
Jeff Jonas (Senzing)
Entity resolution—determining “who is who” and “who is related to whom”—is essential to almost every industry, including banking, insurance, healthcare, marketing, telecommunications, social services, and more. Jeff Jonas details how you can use a purpose-built real-time AI, created for general-purpose entity resolution, to gain new insights and make better decisions faster.
Zaid Tashman (Accenture Labs)
Today traditional approaches to predictive maintenance fall short. Zaid Tashman dives into a novel approach to predict rare events using a probabilistic model, the mixed membership hidden Markov model, highlighting the model's interpretability, its ability to incorporate expert knowledge, and how the model was used to predict the failure of water pumps in developing countries.
Jim Dowling (Logical Clocks), Ajit Mathews (AMD)
The Radeon open ecosystem (ROCm) is an open source software foundation for GPU computing on Linux. ROCm supports TensorFlow and PyTorch using MIOpen, a library of highly optimized GPU routines for deep learning. Jim Dowling and Ajit Mathews outline how the open source Hopsworks framework enables the construction of horizontally scalable end-to-end machine learning pipelines on ROCm-enabled GPUs.
Edward Oakes (UC Berkeley Electrical Engineering & Computer Sciences), Peter Schafhalter (UC Berkeley RISELab), Kristian Hartikainen (University of Oxford)
Edward Oakes, Peter Schafhalter, and Kristian Hartikainen take a deep dive into Ray, a new distributed execution framework for AI applications developed by machine learning and systems researchers at RISELab. They explore Ray’s API and system architecture and share application examples, including several state-of-the-art distributed training, hyperparameter search, and RL algorithms.
Ahmed Kamal (Careem)
Every day Careem’s platform relies on machine learning (ML) in production to enable the movement of millions of its users. Ahmed Kamal outlines the challenges Careem faced while productionizing ML on scale and explains how to build an in-house ML platform that facilitates development and fast deployment of scalable ML services and accelerates the impact of ML everywhere.
Arun Kejariwal (Independent), Ira Cohen (Anodot)
Sequence to sequence (S2S) modeling using neural networks has become increasingly mainstream in recent years. In particular, it's been used for applications such as speech recognition, language translation, and question answering. Arun Kejariwal and Ira Cohen walk you through how S2S modeling can be leveraged for these use cases, visualization, real-time anomaly detection, and forecasting.
Douglas Calegari (Independent)
Douglas Calegari details a solution that classifies and routes emails coming into a busy insurance service center. Join in to discover how his team evaluated NLP models, leveraged various techniques to increase classification and entity recognition accuracy, designed a scalable end-to-end machine learning data pipeline, and integrated them into an existing transactional system.
Ready, set, network! Meet fellow attendees who are looking to connect at the AI Conference. We'll gather before Wednesday and Thursday keynotes for an informal speed networking event. Be sure to bring your business cards—and remember to have fun.
Ready, set, network! Meet fellow attendees who are looking to connect at the AI Conference. We'll gather before Wednesday and Thursday keynotes for an informal speed networking event. Be sure to bring your business cards—and remember to have fun.
Ready, set, network! Meet fellow attendees who are looking to connect at the AI Conference. We'll gather before Wednesday and Thursday keynotes for an informal speed networking event. Be sure to bring your business cards—and remember to have fun.
Ian Massingham (Amazon Web Services)
Reinforcement learning is an advanced machine learning technique that makes short-term decisions while optimizing for a longer-term goal through trial and error. Ian Massingham dives into state-of-the-art techniques in deep reinforcement learning for a variety of use cases.
Pramod Singh (Publicis Sapient), Akshay Kulkarni (Publicis Sapient)
An estimated 80% of the data generated is in an unstructured format, such as text, images, audio, or video. Vijay Srinivas Agneeswaran, Pramod Singh, and Akshay Kulkarni explore how to create a language model that generates natural language text by implementing recurrent neural networks and attention networks built on top of TensorFlow 2.0.
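A compact, hedged sketch of a recurrent language model in TensorFlow 2.x Keras is shown below; the vocabulary size, layer widths, and commented-out training call are illustrative assumptions rather than the session's implementation.

```python
# A compact recurrent language model: predict the next token at each position.
import tensorflow as tf

vocab_size, embed_dim, rnn_units = 100, 64, 128

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embed_dim),
    tf.keras.layers.GRU(rnn_units, return_sequences=True),
    tf.keras.layers.Dense(vocab_size),     # logits over the next token
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
# model.fit(token_sequences, shifted_sequences, epochs=10)  # hypothetical data
```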
Robert Crowe (Google), Pedram Pejman (Google)
Putting together an ML production pipeline for training, deploying, and maintaining ML and deep learning applications is much more than just training a model. Robert Crowe and Pedram Pejman explore Google's TFX, an open source version of the tools and libraries that Google uses internally, made using its years of experience in developing production ML pipelines.
Martin Goodson (Evolution AI)
Data leakage occurs when the model gains access to data that it shouldn't have. AI systems can fail catastrophically in production if leakage is not dealt with properly. Martin Goodson details the four main manifestations of data leakage and explains how to recognize the warning signs. By mastering several key scientific principles, you can mitigate the risk of failure.
Cam Buscaron (Amazon Web Services)
As robots and AI systems become more prevalent in enterprise, industrial, and home settings, there's an increasing need for well-maintained, reliable, and secure development tools and frameworks for the next-generation production-grade robots and systems. Cam Buscaron explains how to leverage large-scale cloud simulation and the Robot Operating System (ROS) to build such systems.
Head of AI at BEN, Tyler Folkman, will cover the major milestones and historical events of AI, and discuss what the future might hold for AI in the Entertainment Industry. For example, is the singularity near? In the future, could AI just write my Ignite talk? Or perhaps, it already has.
Casey Dugan (IBM Research), Zahra Ashktorab (IBM Research)
Casey Dugan and Zahra Ashktorab challenge you to guess the backdoor of a hacked classifier. Join them to learn more about novel AI technologies through the design and development of engaging games. Take a look at their latest research around improving the interactions between humans and AI systems from empathy building to feedback design.
The AI revolution is poised to scale both machine and human knowledge. To generate that knowledge, companies must think differently about AI and how to deploy it. Alexis will cover the three “Be’s”, and how to approach AI systematically to truly harness knowledge at scale.
Ihab Ilyas (University of Waterloo)
Ihab Ilyas highlights the data-quality problem and describes the HoloClean framework, a state-of-the-art prediction engine for structured data with direct applications in detecting and repairing data errors, as well as imputing missing labels and values.
Topic Table discussions are a great way to informally network with people in similar industries or interested in the same topics.
Author Book Signings will be held in the O’Reilly booth during the conference. This is a great opportunity for you to meet O’Reilly authors and get a free copy of one of their books. Complimentary copies will be provided to the first 25 attendees. Limit one free book per attendee.
Ben Lorica (O'Reilly), Roger Chen (Computable), Alexis Crowell Helzer (Intel)
Program chairs Ben Lorica, Roger Chen, and Alexis Helzer open the second day of keynotes.
Danielle Deibler (MarvelousAI)
Danielle Deibler examines an approach to detecting bias, fine-grained emotional sentiment, and misinformation through the detection of political narratives in online media. As building blocks, the methodology uses human-in-the-loop, alongside other natural language processing and computational linguistics techniques, with examples focused on the 2020 US presidential election.
TraceHub is a platform that connects novel time-series analytics with datasets from different domains. Analytics and dataset owners can find insights in an automated setting and improve on them. TraceHub can significantly reduce the time needed to realize the true potential of budding time-series research.
Danielle Dean (iRobot), Mathew Salvaris (Microsoft), Wee Hyong Tok (Microsoft)
Danielle Dean, Mathew Salvaris, and Wee Hyong Tok outline the recommended ways to train and deploy Python models on Azure, ranging from running massively parallel hyperparameter tuning using Hyperdrive to deploying deep learning models on Kubernetes.
Demand for AI compute is doubling every three months. Alexis Crowell Helzer explains why the way we compute AI has to be completely rethought so it can evolve to enable the promise of global business transformation.
Arash Ghazanfari (Dell Technologies)
As we look toward more demanding applications of artificial intelligence to unlock value from data, it's increasingly essential to develop a sustainable big data strategy and to efficiently scale artificial intelligence initiatives. Arash Ghazanfari covers the fundamental principles that need to be considered in order to achieve this goal.
Jewel James (Gojek), Mudit Maheshwari (Gojek)
GoFood, Gojek's food delivery product, is one of the largest of its kind in the world. Jewel James and Mudit Maheshwari explain how they prototyped the search framework that personalizes the restaurant search results by using ML to learn what constitutes a relevant restaurant given a user's purchasing history.
Sergey Ermolin (Amazon Web Services), Vineet Khare (Amazon Web Services)
Sergey Ermolin and Vineet Khare provide a step-by-step overview of how to implement, train, and deploy a reinforcement learning (RL)-based recommender system with real-time multivariate optimization. They show you how to leverage RL to implement a recommender system that optimizes an advertisement message promoting the adoption of a merchant's services.
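As a conceptual stand-in for the RL-based message optimization described, here is an epsilon-greedy bandit loop in plain Python: explore candidate ad messages, then favour the one with the best observed adoption rate. Message names, adoption probabilities, and rewards are made up for illustration and are not the speakers' implementation.

```python
# Conceptual epsilon-greedy bandit for choosing which ad message to show.
import random

messages = ["offer_a", "offer_b", "offer_c"]
counts = {m: 0 for m in messages}
rewards = {m: 0.0 for m in messages}

def choose_message(epsilon=0.1):
    if random.random() < epsilon:                       # explore
        return random.choice(messages)
    return max(messages,                                # exploit the best so far
               key=lambda m: rewards[m] / counts[m] if counts[m] else 0.0)

def record_feedback(message, adopted):
    counts[message] += 1
    rewards[message] += 1.0 if adopted else 0.0

true_rates = {"offer_a": 0.02, "offer_b": 0.05, "offer_c": 0.03}  # simulated world
for _ in range(1000):
    m = choose_message()
    record_feedback(m, adopted=random.random() < true_rates[m])
```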
Vanja Paunic (Microsoft)
Hyperparameter optimization for machine learning is a complex task that requires advanced optimization techniques and can be implemented as a generic framework decoupled from the specific details of algorithms. Vanja Paunic shows how such a framework can be applied to unrelated learning tasks like object detection and text matching in a transparent, scalable, and easy-to-manage way in a cloud service.
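A stripped-down sketch of such a task-agnostic search loop appears below: the framework only needs a train-and-evaluate callable supplied by the caller (object detection, text matching, or anything else). The random-search strategy, search space, and toy objective are illustrative assumptions, not the session's framework.

```python
# Task-agnostic hyperparameter search: the task plugs in as a callable.
import random

def random_search(train_and_evaluate, search_space, n_trials=20):
    best_score, best_params = float("-inf"), None
    for _ in range(n_trials):
        params = {name: random.choice(values) for name, values in search_space.items()}
        score = train_and_evaluate(params)   # task-specific, supplied by the caller
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

space = {"learning_rate": [1e-4, 1e-3, 1e-2], "batch_size": [16, 32, 64]}
best, score = random_search(lambda p: -p["learning_rate"], space)  # toy objective
```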
Topic Table discussions are a great way to informally network with people in similar industries or interested in the same topics.
Author Book Signings will be held in the O’Reilly booth during the conference. This is a great opportunity for you to meet O’Reilly authors and get a free copy of one of their books. Complimentary copies will be provided to the first 25 attendees. Limit one free book per attendee.
Ben Lorica (O'Reilly), Roger Chen (Computable), Alexis Crowell Helzer (Intel)
Program chairs Ben Lorica, Roger Chen, and Alexis Helzer open the first day of keynotes.
Raffaello D’Andrea (Verity | ETH Zurich)
It's hard to ignore the attention given to autonomy and robotics. The impact is significant and the reach is extensive, hitting transportation with self-driving cars, logistics and supply with mobile robots, and remote sensing applications with aerial vehicles or drones. Raffaello D'Andrea explores how autonomous indoor drones will drive the next wave of autonomous robotics development and growth.
Marta Kwiatkowska (University of Oxford)
Machine learning solutions are revolutionizing AI, but Marta Kwiatkowska explores their instability against adversarial examples—small perturbations to inputs that can catastrophically affect the output—which raises concerns about the readiness of this technology for widespread deployment.
James Fletcher (Grakn)
Statistical approaches alone are not sufficient to tackle the complexity of AI challenges today. Being smarter with the data we already have is critical to achieving machine understanding of any complex domain. James Fletcher explains how knowledge graph convolutional networks (KGCNs) demonstrate the usefulness of combining a connectionist deep learning approach with a symbolic approach.
Laurence Moroney (Google)
Laurence Moroney explores how to go from wondering what machine learning (ML) is to building a convolutional neural network to recognize and categorize images. With this, you'll gain the foundation to understand how to use ML and AI in apps all the way from the enterprise cloud down to tiny microcontrollers using the same code.

Contact us

confreg@oreilly.com

For conference registration information and customer service

partners@oreilly.com

For more information on community discounts and trade opportunities with O’Reilly conferences

aisponsorships@oreilly.com

For information on exhibiting or sponsoring a conference

pr@oreilly.com

For media/analyst press inquiries