Jonathan Mugan surveys the field of natural language processing (NLP) from both symbolic and subsymbolic perspectives, arguing that the current limitations of NLP stem from computers' lack of a grounded understanding of our world. Jonathan then outlines ways that computers can achieve that understanding.
AI has the power to transform critical business processes, but new methods will be essential to analyze and visualize data—not as a one-time event but as a continuous process. As a result, a new computing paradigm and deep learning software stack will also be required to process, predict, and act on data to accelerate this transition and produce AI enterprise applications.
Paco Nathan explains how O'Reilly employs AI, from the obvious (chatbots, case studies about other firms) to the less so (using AI to show the structure of content in detail, enhance search and recommendations, and guide editors for gap analysis, assessment, pathing, etc.). Approaches include vector embedding search, summarization, TDA for content gap analysis, and speech-to-text to index video.
Lindsey Zuloaga explains how machine learning from video interviews is disrupting the human resources space, bringing top candidates to the attention of recruiters and drastically reducing the time and energy companies spend finding and assessing potential employees.
Artificial intelligence is playing an increasingly important role in new software products, but the workflow of an AI researcher is quite different from that of a software developer. Peter Norvig explains how the two can come together.
Artificial intelligence has had a tremendous impact on various applications at Baidu, including speech recognition and autonomous driving, although the performance requirements for these applications differ widely. Sharan Narang outlines the challenges of inference for deep learning models, along with the workloads and performance requirements of various applications.
Anmol Jagetia explains how to use OpenAI's Gym and Universe to design bots that learn through reinforcement learning. You'll create a bot that uses reinforcement learning to beat games and learn how to reuse code to beat a set of games that includes Atari classics (such as Pac-Man and Pong), a Candy Crush clone, and a racing game.
Josh Tenenbaum explains how to build machines that learn and think like people.
Autonomous cars tend to treat people like obstacles whose motion needs to be anticipated so that the car can best stay out of their way, resulting in ultradefensive cars that can't coordinate with people. Anca Dragan demonstrates how learning and optimal control can be leveraged to generate car behavior that results in natural coordination strategies.
There has been a quantum leap in the performance of conversational AI. From speech recognition to machine translation and language understanding, deep learning has made its mark. However, scaling and productizing these breakthroughs remains a big challenge. Yishay Carmiel shares techniques and tips on how to take advantage of large datasets, accelerate training, and create an end-to-end product.
Eric Greene compares different approaches to creating models that predict payment amounts, time, and recipient for recurring expenses such as rent, loans, utilities, and services, outlining the data requirements, feature modeling, and neural network architectures that work best, as well as common issues in training and deploying deep learning networks.
Pau Carré explains how Gilt is reshaping the fashion industry by leveraging the power of deep learning and GPUs to automatically detect similar products and identify facets in dresses.
Naveen Rao explains how Intel Nervana is evolving the AI stack from silicon all the way to the cloud so that true AI transformation can happen across every experience and every vertical.
Fraud in banking is an arms race, with criminals using machine learning to improve the effectiveness of their attacks. Ron Bodkin and Nadeem Gulzar explore how Danske Bank uses deep learning for better fraud detection, covering model effectiveness, TensorFlow versus boosted decision trees, operational considerations in training and deploying models, and lessons learned along the way.
Kristian Hammond shares a practical framework for understanding the role of AI technologies in problem solving and decision making, focusing on how they can be used, the requirements for doing so, and the expectations for their effectiveness.
Rakesh Chada introduces x.ai's Amy, an AI assistant that schedules meetings via email. Rakesh discusses Amy's architecture and the various challenges the team faced during its design and shares several machine learning approaches for intent classification. Rakesh concludes by exploring a novel method for error optimization in a conversational agent that exploits customer error tolerance.
Joseph Bradley and Xiangrui Meng share best practices for integrating popular deep learning libraries with Apache Spark, covering cluster setup and configuration, data ingestion, and job monitoring. Joseph and Xiangrui then demonstrate these techniques using Google’s TensorFlow library.
Ben Medlock explores the future of AI, explaining why the potential it holds is not at all frightening. Ben argues that the key to achieving elusive human-like AI lies in a central piece of the puzzle: embodiment.
The internet giants are fully embracing AI. The services they offer are all aimed at using data to draw a map of the world, and they are using AI to build disruptive approaches that can't be replicated by established enterprises, which are threatened by these disruptions. However, as Rene Buest explains, most leaders still underestimate the effect this will have on their businesses.
Amy Unruh offers a quick overview of machine learning on Google Cloud Platform and demonstrates a couple of the Google Cloud ML APIs. She then briefly highlights a few OSS TensorFlow models and explains how to use transfer learning to fine-tune them with your own data.
AI systems should not only propose solutions or answers but also explain why they make sense. Statistical machine learning is a powerful tool for discovering patterns in data, but, David Ferrucci asks, can it produce understanding or enable humans to justify and take reasoned responsibility for individual outcomes?
Doug Eck offers an overview of Magenta, a Google Brain project to develop new generative machine learning models for art and sound creation, allowing us to better understand how machine learning can be used by artists and musicians to make something new. Doug provides demos and explains where this work fits in with other AI research being done at Google and elsewhere.
As interactive and autonomous systems make their way into nearly every aspect of our lives, it is crucial to build trust in intelligent systems. Mark Hammond explores the latest techniques and research in building explainable AI systems. Join in to learn approaches for building explainability into control and optimization tasks, including robotics, manufacturing, and logistics.
Clara Labs is fusing machine learning (ML) with distributed human labor for natural language tasks. The result is a virtuous cycle: ML predictions improve workers’ efficiency, and workers help improve prediction models. Jason Laska explores the challenges of building a real-time(ish) knowledge workforce, how to integrate automation, and key strategies Clara Labs learned that enable scale.
Tuomas Sandholm offers an overview of Libratus—an AI that beat a team of four top specialist pros in heads-up no-limit Texas hold’em, which has 10^161 decision points—and explains how Strategic Machine is applying the domain-independent algorithms behind Libratus to a variety of imperfect-information games.
AI presents a huge opportunity for businesses to personalize customer experiences and improve efficiency, but the technical complexity of AI puts it out of reach for most companies. Richard Socher explains how Salesforce is doing the heavy lifting to deliver seamless and scalable AI to its customers.
The promise of AI in the newsroom is contradictory: NLG revolutionizes news writing, but robot journalists threaten jobs; NLP improves fact-checking but requires investments that slimmed-down newsrooms cannot afford. Drawing on Norwegian AI startup Orbit’s experience, Codruta Gamulea explains how AI can help solve the industry resource constraints and improve the quality of journalism.
Damion Heredia explores how augmented intelligence is helping companies disrupt industries and enabling them to make better decisions.
It is imperative to make high-profile technologies like AI affordable in order for these technologies to proliferate and to benefit the general public. Shaoshan Liu discusses PerceptIn's road to affordable AI-capable products.
Kristian Hammond offers an overview of advanced natural language generation (NLG), a subfield of artificial intelligence, and the assorted technical systems involved with this emerging technology, along with the mechanisms that drive them.
The speed with which automation technologies are emerging today and the extent to which they could disrupt the world of work are largely without precedent. How big could the impact be on the world of work, and how rapidly will it be felt? Katy George explores these questions, drawing on a major new report from the McKinsey Global Institute.