As artificial intelligence (and specifically machine learning) firmly takes hold in industry, there has been a significant increase in the amount of AI snake oil being developed, pitched, and sold. Joshua Joseph shares a practical guide for detecting AI products of questionable value or benefit, whether intentional or not.
Keynote by Steve Jurvetson
Much like the rise of electricity, which started about 100 years ago, AI will revolutionize every major industry. Andrew Ng explains how AI can transform your business, shares major technology trends and thoughts on where your biggest future opportunities may lie, and explores best practices for incorporating AI, machine learning, and deep learning into your organization.
Lili Cheng shares two examples of AI inspired by nature. In the first, Microsoft researchers created an AI system that draws on the way birds fly to keep a sailplane aloft. The second explores what makes people unique, our language instinct, and our ability to model how people socialize and accomplish work.
Paco Nathan explains how O'Reilly employs AI, from the obvious (chatbots, case studies about other firms) to the less so (using AI to show the structure of content in detail, enhance search and recommendations, and guide editors for gap analysis, assessment, pathing, etc.). Approaches include vector embedding search, summarization, TDA for content gap analysis, and speech-to-text to index video.
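The vector embedding search mentioned above can be sketched in a few lines: documents are mapped to vectors, and a query is answered by ranking documents by cosine similarity to the query's vector. The document names and three-dimensional "embeddings" below are toy illustrations, not O'Reilly's actual data or pipeline.

```python
from math import sqrt

def cosine(u, v):
    # Cosine similarity: dot product normalized by vector lengths.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# Toy document embeddings (hypothetical; real systems use hundreds of dims).
docs = {
    "intro-to-ml": [0.9, 0.1, 0.0],
    "deep-learning": [0.8, 0.3, 0.1],
    "cooking-basics": [0.0, 0.1, 0.9],
}

def search(query_vec, k=2):
    # Rank documents by similarity to the query embedding; return top k.
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

print(search([1.0, 0.2, 0.0]))  # an "ML-flavored" query ranks the ML docs first
```

In production the same idea is backed by an approximate-nearest-neighbor index rather than a linear scan.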
Mercy and Intermountain, two of the largest and most innovative hospital systems in the United States, have recently applied AI to tackle clinical variation within their systems. Todd Steward and Lonny Northrup discuss the application of machine intelligence for optimizing care and provide valuable insights into practice variation for improving clinical pathways.
Deep learning is used broadly at the forefront of research, achieving state-of-the-art results across a variety of domains. However, that doesn't mean it's a fit for all tasks—especially when the constraints of production are considered. Stephen Merity investigates what tasks deep learning excels at, what tasks trigger a failure mode, and where current research is looking to remedy the situation.
Google has invested deeply in machine learning for many years and uses it successfully across its consumer businesses. Philippe Poutonnet explains how to leverage the power of ML with Google Cloud, using the platform's powerful data management tools, support for collaborative experiments, and predictions at Google scale.
Tools, frameworks, access to high-value data, and practical approaches to deployment and integration with existing systems and applications are just some of the considerations facing companies adopting deep learning. Ron Bodkin explores tools, open source technology, frameworks, and strategies to cost-effectively achieve strategic results with deep learning in the enterprise.
Abu Qader’s personal experience is a testament to the increasing impact and accessibility of AI technology. As a high school student, he taught himself machine learning using open online resources and launched an AI company for breast cancer diagnostics. Peter Norvig sits down with Abu to share anecdotes, discuss the state of artificial intelligence, and explore where things are heading.
Mark Hammond explores how enterprises can move beyond games and leverage deep reinforcement learning and simulation-based training to build programmable, adaptive, and trusted AI models for their real-world applications.
Ruchir Puri explores the opportunities and challenges of AI for business, focusing on what is needed to truly scale out AI applications and systems across the breadth of enterprises.
The field of artificial intelligence has made major strides in recent years, but there is a growing movement to consider the implications of machines that can rival humans in general problem-solving abilities. Nate Soares outlines the under-researched fundamental technical obstacles to building AI that can reliably learn to be "aligned" with human values.
Kenneth Stanley offers an overview of the field of neuroevolution, an emerging paradigm for training neural networks through evolutionary principles that has grown up alongside more conventional deep learning, highlighting major algorithms such as NEAT, HyperNEAT, and novelty search, the field's emerging synergies with deep learning, and promising application areas.
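The core idea of neuroevolution can be sketched in its simplest form: evolve a network's weights by mutation and selection instead of gradient descent. The sketch below evolves a fixed one-hidden-unit network to fit y = 2x; everything here (topology, target, hyperparameters) is illustrative, and note that NEAT additionally evolves the topology itself, which this sketch does not.

```python
import math
import random

random.seed(0)

def forward(w, x):
    # Tiny fixed topology: one hidden tanh unit, four weights in total.
    w1, b1, w2, b2 = w
    return w2 * math.tanh(w1 * x + b1) + b2

XS = [-1.0, -0.5, 0.0, 0.5, 1.0]

def fitness(w):
    # Negative mean squared error against the target y = 2x.
    return -sum((forward(w, x) - 2 * x) ** 2 for x in XS) / len(XS)

# Evolution loop: keep the 10 fittest genomes, refill the population
# with mutated copies of them (Gaussian weight perturbations).
pop = [[random.gauss(0, 1) for _ in range(4)] for _ in range(50)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [
        [g + random.gauss(0, 0.1) for g in random.choice(parents)]
        for _ in range(40)
    ]

best = max(pop, key=fitness)
print(round(-fitness(best), 4))  # final MSE, far below the ~2.0 constant baseline
```

No gradients are computed anywhere, which is what lets this family of methods handle non-differentiable fitness signals such as novelty.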
Join Naveen Rao and Steve Jurvetson for a fireside chat.
Many new theoretical challenges have arisen in the area of gradient-based optimization for large-scale data analysis, driven by the needs of applications and the opportunities provided by new hardware and software platforms. Michael Jordan shares recent research on the avoidance of saddle points in high-dimensional nonconvex optimization.
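The saddle-point issue can be illustrated with the textbook example f(x, y) = x² − y², whose gradient vanishes at the origin even though it is not a minimum: plain gradient descent started there never moves, while adding small random perturbations (the mechanism behind perturbed gradient descent) lets the iterate escape along the −y² direction. This toy demo is ours, not taken from the research discussed in the talk.

```python
import random

random.seed(1)

def grad(p):
    # Gradient of f(x, y) = x^2 - y^2, which has a saddle at (0, 0).
    x, y = p
    return (2 * x, -2 * y)

def gd(p, steps=100, lr=0.1, noise=0.0):
    # Gradient descent, optionally with isotropic Gaussian perturbations.
    for _ in range(steps):
        gx, gy = grad(p)
        p = (p[0] - lr * gx + noise * random.gauss(0, 1),
             p[1] - lr * gy + noise * random.gauss(0, 1))
    return p

stuck = gd((0.0, 0.0))                 # plain GD: the gradient is zero, so it stays put
escaped = gd((0.0, 0.0), noise=0.01)   # perturbed GD: noise kicks it off the saddle
print(stuck, escaped)
```

The escaped iterate's y-coordinate grows because the noise is amplified by the negative-curvature direction at each step.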
With the chaotic and rapidly evolving landscape around deep learning, we need deep learning-specific compilers to enable maximum performance in a wide variety of use cases on a wide variety of hardware platforms. Jason Knight offers an overview of the Intel Nervana Graph project, which was designed to solve this problem.
Marcos Campos offers an overview of reinforcement learning, walking you through the various classes of reinforcement learning algorithms, the types of problems that can be solved with this technique, and how to build and train AI models using reinforcement learning and reward functions.
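The "train an AI model using a reward function" idea can be made concrete with tabular Q-learning, one of the simplest reinforcement learning algorithms. The corridor environment and hyperparameters below are illustrative, not from the talk: the agent starts at state 0 and the reward function pays +1 only for reaching state 4.

```python
import random

random.seed(0)

N_STATES, ACTIONS = 5, (-1, +1)          # 5-state corridor; move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2        # learning rate, discount, exploration

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    reward = 1.0 if s2 == N_STATES - 1 else 0.0   # the reward function
    return s2, reward, s2 == N_STATES - 1

for episode in range(200):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update toward the bootstrapped target.
        target = r + (0.0 if done else gamma * max(Q[(s2, b)] for b in ACTIONS))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # greedy policy learned from the reward alone
```

The agent is never told to "move right"; the behavior emerges purely from the reward signal, which is the defining trait of this class of algorithms.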
Taxes are one of consumers' most complex financial transactions, thanks to a tax code that is 80,000 pages long. Gang Wang explains how Intuit built the industry’s only Tax Knowledge Engine, a constraint-based engine that encodes changing financial regulations and provides the foundation for a host of artificial intelligence technologies that save customers time and money.
John Whalen explores the concept of cognitive design, describing how humans structure their commands to AI systems (syntax, word usage, prosody) and how to measure human reactions to AI responses using biometrics (facial emotion recognition, heart rate, GSR). Along the way, John shares insights into how to optimally architect the customer experience.
Tim O’Reilly draws on lessons from networked platforms to show how our economy and financial markets have also become increasingly managed by algorithms, making the case that income inequality, declining upward mobility, and job losses due to technology are not inevitable; they are the result of design choices we have made in the algorithms that manage our markets.
Andy Steinbach shares case studies and applications in artificial intelligence that are having an impact on financial markets.
Rana el Kaliouby lays out a vision for an emotion-enabled world of technology, sharing the inner workings of a multimodal emotion sensing platform that identifies emotions through facial expressions and tone of voice. Along the way, Rana explores the broad applications and ethical implications of this technology.
Program chairs Ben Lorica and Roger Chen kick off the O'Reilly AI Conference in San Francisco with an overview of the current trends they have observed in the industry.
Jia Li has contributed to some of the most influential datasets in the world and helped transform computer vision from an academic niche into a dominant technology. Jia explains why a democratized approach to AI ensures that the compute, data, algorithms, and talent behind these technologies reach the widest possible audience.
Bruno Gonçalves explores word2vec and its variations, discussing the main concepts and algorithms behind the neural network architecture used in word2vec and the word2vec reference implementation in TensorFlow. Bruno then presents a bird's-eye view of the emerging field of "anything"-2vec methods that use variations of the word2vec neural network architecture.
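The skip-gram formulation at the heart of word2vec turns a corpus into (center, context) training pairs, where each word predicts its neighbors within a window. The pair-extraction step, which is also where most "anything"-2vec variants differ, can be sketched as follows (this sketch is ours, not Bruno's or the TensorFlow reference implementation):

```python
def skipgram_pairs(tokens, window=2):
    # For each position, emit (center, context) pairs for every neighbor
    # within `window` tokens on either side.
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

sentence = "the quick brown fox".split()
print(skipgram_pairs(sentence, window=1))
```

These pairs then feed a shallow network whose hidden-layer weights become the word vectors; swapping "tokens in a sentence" for nodes in a walk, items in a session, and so on yields the 2vec variations Bruno surveys.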