Presented By O’Reilly and Intel Nervana
Put AI to work
September 17-18, 2017: Training
September 18-20, 2017: Tutorials & Conference
San Francisco, CA

Ensuring smarter-than-human intelligence has a positive outcome.

Nate Soares (MIRI)
2:35pm–3:15pm Wednesday, September 20, 2017
Implementing AI
Location: Franciscan AB
Secondary topics: Law, ethics and governance (including AI safety), Tools and frameworks

Prerequisite Knowledge

  • A basic understanding of AI, AGI, and superintelligence concepts and terminology

What you'll learn

  • Understand the fundamental difficulty of AI alignment, the current underserved areas in AI safety research, and the justification for formalizing open problems in mathematical language


The field of artificial intelligence has made major strides in recent years. The social and cultural prospects of general artificial intelligence are often left to philosophers, ethicists, and science fiction authors to hash out, with relatively little involvement from researchers in the field. As a result, discussion of the field’s long-term impact is often divorced from the field’s realities and constraints and pays too little attention to the key technical questions for ensuring that AI systems are safe and reliable in practice. But there is a growing movement, instigated by luminaries in science and industry, to consider the implications of machines that can rival humans in general problem-solving ability and in our capacity to reason, learn, and devise plans.

Nate Soares argues that the key questions for developing safe general AI are not always the obvious ones. Intuitively, we would expect more capable systems to be better at doing what we want and to therefore be safer than less capable systems. In many cases, however, capability gains (e.g., higher scores in machine learning tasks) can introduce surprising new failure modes. There are a number of underresearched technical obstacles to building machines that can reliably learn to promote our goals over time, over and above the technical obstacles to building machines that can reliably learn about strictly factual questions. Building on work by Nick Bostrom, Stuart Russell, and others, Nate outlines four basic propositions about general AI systems and shares current research into the problem of aligning such systems with our values.


Nate Soares


Nate Soares is the executive director of the Machine Intelligence Research Institute (MIRI), a Berkeley research nonprofit focused on the challenge of making superhumanly capable AI systems robust and reliable. Nate first joined MIRI as a research fellow; during that time, he was the primary author of the organization’s technical agenda. He has contributed to ongoing work in decision theory, game theory, algorithmic information theory, computational reflection, online machine learning, mathematical logic, and a number of other areas. Previously, Nate worked as a software engineer at Google.