Andrew Trask details the most important new techniques in secure, privacy-preserving, and multi-owner-governed artificial intelligence. Andrew begins with a sober, up-to-date view of the current state of AI safety, user privacy, and AI governance before introducing some of the fundamental tools of technical AI safety: homomorphic encryption, secure multiparty computation, federated learning, and differential privacy. He concludes with an exciting demo from the OpenMined open source project that illustrates how to train a deep neural network while both the training data and the model remain in a safe, encrypted state during the entire process.
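To give a flavor of the secure multiparty computation mentioned above, here is a minimal sketch of additive secret sharing, the primitive that lets several parties compute on values none of them can see individually. This example is not from the talk or from OpenMined's libraries; the modulus, helper names, and values are illustrative.

```python
import random

Q = 2**31 - 1  # public modulus; all arithmetic is done mod Q

def share(secret, n_parties=3):
    """Split an integer into n additive shares that sum to it mod Q."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    """Recombine shares; only all parties together recover the secret."""
    return sum(shares) % Q

def add_shared(a_shares, b_shares):
    """Each party adds its own shares locally; no party ever sees a or b."""
    return [(a + b) % Q for a, b in zip(a_shares, b_shares)]

a = share(25)   # distributed to three parties
b = share(17)
c = add_shared(a, b)
assert reconstruct(c) == 42  # the sum is correct, yet each party held only noise
```

Frameworks like those OpenMined builds extend this idea from single integers to the tensors of a neural network, so that training can proceed while the data and model weights stay secret-shared.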
Andrew Trask is a PhD student at the University of Oxford, where he researches new techniques for technical AI safety. Andrew has a passion for making complex ideas easy to learn. As such, he is the author of the book Grokking Deep Learning, an instructor in Udacity's Deep Learning nanodegree program, and the author of the popular deep learning blog i am trask. He is also the leader of the OpenMined open source community, a group of over 3,000 researchers, practitioners, and enthusiasts that extends major deep learning frameworks with open source tools for technical AI safety.
©2018, O’Reilly UK Ltd • (800) 889-8969 or (707) 827-7019 • Monday-Friday 7:30am-5pm PT • All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.