Mark Hammond explains how Bonsai’s platform enables every developer to add intelligence to their software or hardware, regardless of AI expertise. Bonsai’s suite of tools—a new programming language, AI engine, and cloud service—abstracts away the lowest-level details of programming AI, allowing developers to focus on concepts they want a system to learn and how those concepts can be taught.
In any human-machine interaction, you need a dialogue model: the machine must understand and be able to respond appropriately. Angela Zhou discusses x.ai's AI personal assistant, Amy Ingram, who schedules meetings for you, focusing on the way x.ai has approached both understanding and responding.
Francisco Webber offers a critical overview of current approaches to artificial intelligence using "brute force" (aka big data machine learning) as well as a practical demonstration of semantic folding, an alternative approach based on computational principles found in the human neocortex. Semantic folding is not just a research prototype—it's a production-grade enterprise technology.
Genevieve Bell explores the meaning of “intelligence” within the context of machines and its cultural impact on humans and their relationships. Genevieve interrogates AI not just as a technical agenda but as a cultural category in order to understand the ways in which the story of AI is connected to the history of human culture.
Machine learning is evolving to utilize new hardware, such as GPUs and large commodity clusters. Reza Zadeh presents two projects that have benefited greatly from scaling: obtaining leading results on the Princeton ModelNet object recognition task and matrix computations and optimization in Apache Spark.
Anna Roth and Cristian Canton walk you through building a system to recognize emotions by inferring them from facial expressions. Cristian and Anna explain how they trained their emotion recognition CNN from noisy data and how to approach labeling subjective data like emotion with crowdsourcing before showing a demo of this work in action, as it is exposed in Microsoft’s Emotion API.
Open source software frameworks are key to applying deep learning technologies. Orion Wolfe and Shohei Hido introduce Chainer, a standalone Python-based framework that lets users intuitively implement many kinds of models, including recurrent neural networks, with great flexibility and performance comparable to other GPU-accelerated frameworks.
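Chainer is known for its "define-by-run" approach, in which the computation graph is recorded as the forward pass executes, so control flow can differ per input. The toy sketch below illustrates that principle in plain Python; it is not Chainer's actual API, just a minimal stand-in for the idea.

```python
# Toy illustration of the define-by-run idea: the graph is recorded
# while ordinary Python code runs, so data-dependent loops and branches
# (as in recurrent networks) need no special graph-building step.
# This is NOT Chainer's real API, only a conceptual sketch.

class Var:
    def __init__(self, value, parents=(), grad_fns=()):
        self.value = value
        self.parents = parents    # upstream Vars, recorded at runtime
        self.grad_fns = grad_fns  # local derivatives w.r.t. each parent
        self.grad = 0.0

    def __mul__(self, other):
        return Var(self.value * other.value,
                   parents=(self, other),
                   grad_fns=(lambda g: g * other.value,
                             lambda g: g * self.value))

    def __add__(self, other):
        return Var(self.value + other.value,
                   parents=(self, other),
                   grad_fns=(lambda g: g, lambda g: g))

    def backward(self, grad=1.0):
        self.grad += grad
        for parent, fn in zip(self.parents, self.grad_fns):
            parent.backward(fn(grad))

x = Var(3.0)
y = x * x + x    # graph built by simply running the expression
y.backward()     # y = x^2 + x = 12; dy/dx = 2x + 1 = 7
```

Because the graph exists only as a trace of what actually ran, a recurrent model can unroll to a different depth for every sequence length without any static-graph machinery.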
Jay Wang and Jasmine Nettiksimmons explore the business model of Stitch Fix, an emerging startup that combines artificial intelligence and human experts for a personalized shopping experience, and highlight the challenges encountered implementing Stitch Fix's recommendation algorithm and integrating AI with human stylists.
Deep learning has made a major impact in the last three years. Interactions with machines through imperfect channels, such as speech or images, have been made robust by deep learning, which finds usable structure in large datasets. Naveen Rao outlines deep learning challenges and explores how changes to the organization of computation and communication can lead to advances in capabilities.
The high-level view of deep learning is elegant: composing differentiable components together trained in an end-to-end fashion. The reality isn't that simple, and the commonly used tools greatly limit what we are capable of doing. Diogo Almeida explains what we can do about it and offers a practical attempt at a deep learning library of the future.
Neural networks are both computationally and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. Song Han explains how deep compression addresses this limitation by reducing the storage requirement of neural networks without affecting their accuracy and proposes an energy-efficient inference engine (EIE) that works with this model.
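The core intuition behind deep compression can be sketched in two stages: magnitude pruning (drop small weights) and weight sharing (snap surviving weights to a small codebook so only tiny indices need be stored). The thresholds, codebook, and weights below are illustrative toy values, not the settings from Song Han's actual pipeline.

```python
# Sketch of two deep-compression stages: magnitude pruning and
# weight sharing via a small codebook. Toy values throughout.

def prune(weights, threshold):
    """Zero out weights whose magnitude falls below the threshold."""
    return [0.0 if abs(w) < threshold else w for w in weights]

def quantize(weights, codebook):
    """Map each surviving weight to its nearest codebook entry, so
    storage needs only a small index per weight plus the codebook."""
    def nearest(w):
        return min(codebook, key=lambda c: abs(c - w))
    return [0.0 if w == 0.0 else nearest(w) for w in weights]

weights = [0.02, -0.61, 0.48, -0.03, 0.55, 0.01, -0.49]
pruned = prune(weights, threshold=0.1)
shared = quantize(pruned, codebook=[-0.5, 0.5])
# After pruning, 4 of 7 weights survive; after sharing, each survivor
# is a 1-bit index into a 2-entry codebook instead of a 32-bit float.
```

In the real technique, the pruned and quantized network is retrained so the remaining shared weights compensate, which is what preserves accuracy.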
Pieter Abbeel explores deep reinforcement learning for robotics.
Natural-language assistants are the emergent killer app for AI. Getting from here to there with deep learning, however, can require enormous datasets. Christopher Nguyen and Binh Han explain how to shorten the time to effectiveness and the amount of training data that's required to achieve a given level of performance using human-in-the-loop active learning.
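Human-in-the-loop active learning typically means querying a person only for labels on the examples the current model is least sure about. The loop below is a generic pool-based uncertainty-sampling sketch with a toy 1-D threshold model and a simulated human oracle; it illustrates the general idea, not the speakers' actual system.

```python
# Generic pool-based active learning (uncertainty sampling).
# The 1-D threshold "model" and the oracle are toy stand-ins.
import random

def oracle(x):
    # The human in the loop: here simulated with true boundary 0.3.
    return 1 if x >= 0.3 else 0

def fit_threshold(labeled):
    """Fit a 1-D threshold: midpoint between the classes' closest points."""
    pos = [x for x, y in labeled if y == 1]
    neg = [x for x, y in labeled if y == 0]
    if not pos or not neg:
        return 0.5
    return (min(pos) + max(neg)) / 2.0

random.seed(0)
pool = [random.random() for _ in range(200)]        # unlabeled pool
labeled = [(0.0, oracle(0.0)), (1.0, oracle(1.0))]  # two seed labels

for _ in range(10):
    t = fit_threshold(labeled)
    # Query the unlabeled point closest to the decision boundary,
    # i.e., the one the model is most uncertain about.
    query = min(pool, key=lambda x: abs(x - t))
    pool.remove(query)
    labeled.append((query, oracle(query)))

final = fit_threshold(labeled)
# The learned threshold homes in on the true boundary (0.3)
# after only ~10 human labels, versus labeling the whole pool.
```

Each query roughly halves the interval of uncertainty, which is why uncertainty sampling can reach a given accuracy with far fewer labels than random sampling.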
Urs Muller presents the architecture and training methods used to build an autonomous road-following system. A key aspect of the approach is eliminating the need for hand-programmed rules and procedures such as finding lane markings, guardrails, or other cars, thereby avoiding the creation of a large number of “if, then, else” statements.
Keynote by Shahin Farshchi
Alekh Agarwal explains why interactive learning systems that go beyond the routine train/test paradigm of supervised machine learning are essential to the development of AI agents. Along the way, Alekh outlines the novel challenges that arise on both the systems side and the learning side of designing and implementing such systems.
By building a marketplace for algorithms, Algorithmia gained unique experience with building and deploying machine-learning models using a wide variety of frameworks. Kenny Daniel shares the lessons Algorithmia learned through trial and error, the pros and cons of different deep learning frameworks, and the challenges involved with deploying them in production systems.
Aparna Chennapragada explores building data products at Google.
Aman Naimat and Mark Patel present an analysis of the current adoption of AI in industry based on a systematic study of the entire business Internet at over 500,000 companies. Drawing on this data, Aman and Mark offer a new economic framework to discover, measure, and motivate future use cases for AI.
The essence of intelligence is the ability to predict. Prediction, perception, planning/reasoning, attention, and memory are the pillars of intelligence. Yann LeCun describes several projects at FAIR and NYU on unsupervised learning, question answering with a new type of memory-augmented network, and various applications for vision and natural language understanding.
The automation of decisions and actions now threatens even knowledge-worker jobs. Tom Davenport describes both the threat of automation and the promise of augmentation—combining smart machines with smart people—and explores five roles that individuals can adopt to add value to AI, as well as what these roles mean for businesses.
Progress in enterprise AI workloads, particularly in deep learning, big data, and computing infrastructure, will profoundly impact productivity for users. XD Huang outlines enterprise AI examples to illustrate the collective efforts and exciting opportunities modern AI technologies are making possible.
Building reliable, robust software is hard. It is even harder when we move from deterministic domains (such as balancing a checkbook) to uncertain domains (such as recognizing speech or objects in an image). The field of machine learning allows us to use data to build systems in these uncertain domains. Peter Norvig looks at techniques for achieving reliability (and some of the other -ilities).
Pete Warden shows you how to train an object recognition model on your own images and then integrate it into a mobile application. Drawing on concrete examples, Pete demonstrates how to apply advanced machine learning to practical problems without the need for deep theoretical knowledge or even much coding.
The recent explosion of bots on communication platforms has rekindled the hopes of conversational AI. However, building intelligent and customizable bots is not bottlenecked only by NLP and speech recognition. Our biggest limitation is the inability to modularize the goals of human-bot interaction. Suman Roy explains why we need a layered architecture for bots to learn about us from data.
Greg Diamos and Sharan Narang discuss the impact of AI on applications within Baidu, including autonomous driving and speech recognition, offering a brief introduction to the challenges in training deep learning algorithms as well as the different workloads that are used in various deep learning applications.
Babak Hodjat discusses the progress in AI, diving into how AI can offer unique solutions in verticals such as investment, medical diagnosis, and ecommerce. Babak details how using massively scaled distributed evolutionary computation, mimicking biological evolution, allows an AI to learn, adapt, and react faster to provide customers with the answers and decisions they need.
We are entering a new computing paradigm—an era where software will write software. This is the biggest and fastest transition since the advent of the Internet. Big data and analytics brought us information and insight; AI and deep learning turn that insight into superhuman knowledge and real-time action. Jim McHugh shares real-world examples of companies solving problems once thought unsolvable.
Fostering diversity in the burgeoning AI community is a responsibility that falls upon all of us, not just corporate gatekeepers or data scientists with advanced technical degrees. Matt Zeiler unveils groundbreaking new technologies that will transform the way AI is “taught” and make both teaching and using AI accessible to anyone in the world.
Highly connected, interactive artificial intelligence systems surround us daily, but as smart as these systems are, they lack the ability to truly empathize with us humans. Rana El Kaliouby explores why emotion AI is critical to accelerating adoption of AI systems, how emotion AI is being used today, and what the future will look like.
There are many who fear that in the future, AI will do more and more of the jobs done by humans, leaving us without meaningful work. To believe this is a colossal failure of the imagination. Tim O'Reilly explains why we can't just use technology to replace people; we must use it to augment them so that they can do things that were previously impossible.