Presented By O’Reilly and Intel AI
Put AI to work
8-9 Oct 2018: Training
9-11 Oct 2018: Tutorials & Conference
London, UK

Building safe artificial intelligence with OpenMined

Andrew Trask (OpenMined)
11:05–11:45 Thursday, 11 October 2018
Location: Westminster Suite
Secondary topics: Ethics, Privacy, and Security

What you'll learn

  • Learn the most important new techniques in secure, privacy-preserving, and multi-owner-governed artificial intelligence
  • Explore OpenMined, an open source project focused on researching, developing, and promoting tools for secure, privacy-preserving, value-aligned artificial intelligence

Description

Andrew Trask details the most important new techniques in secure, privacy-preserving, and multi-owner-governed artificial intelligence. Andrew begins with a sober, up-to-date view of the current state of AI safety, user privacy, and AI governance before introducing some of the fundamental tools of technical AI safety: homomorphic encryption, secure multiparty computation, federated learning, and differential privacy. He concludes with a demo from the OpenMined open source project that illustrates how to train a deep neural network while both the training data and the model remain encrypted during the entire process.
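To give a flavour of one of the tools named above, here is a minimal sketch of differential privacy using the Laplace mechanism: an analyst's counting query is answered with calibrated noise added, so no single individual's presence in the data can be confidently inferred. The function names and the example query are illustrative, not taken from OpenMined's codebase.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via the inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon is sufficient.
    """
    return true_count + laplace_noise(1.0 / epsilon)

random.seed(0)  # fixed seed so the demo is reproducible
noisy = private_count(42, epsilon=0.5)
```

Smaller values of epsilon mean more noise and stronger privacy; repeated queries consume the privacy budget, which is one reason the talk pairs this technique with federated learning and encrypted computation.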

Andrew Trask

OpenMined

Andrew Trask is a PhD student at the University of Oxford, where he researches new techniques for technical AI safety. Andrew has a passion for making complex ideas easy to learn. As such, he is the author of the book Grokking Deep Learning, an instructor in Udacity’s Deep Learning nanodegree program, and the author of the popular deep learning blog i am trask. He is also the leader of the OpenMined open source community, a group of over 3,000 researchers, practitioners, and enthusiasts that extends major deep learning frameworks with open source tools for technical AI safety.