Allison Duettmann offers an overview of AI philosophy and explains why traditional approaches need updating: they risk paving the way for a singleton AI that is unlikely to be benevolent. Allison then discusses potential alternative AI safety strategies and their shortcomings and shares a brief survey of interesting open problems in AI safety, along with what we can hope for if we get it right.
While this session focuses on a positive long-term future, most of the ideas should also be relevant to current AI, blockchain, and computer security strategies.
Allison Duettmann is an AI safety researcher at the Foresight Institute, where she conducts research and coordinates the institute’s technical programs. Her research focuses on reducing existential risks, especially those from artificial general intelligence. At Existentialhope.com, she maintains an index of readings, podcasts, organizations, and people that inspire an optimistic long-term vision for humanity. The index is collaborative and intended for everyone who wants to improve the world but doesn’t know where to start. Allison speaks and moderates panels on existential risks and existential hope, AI safety, longevity, blockchain, ethics in technology, and more. Previously, she hosted and planned TEDx talks, panels, workshops, debates, and conferences for governments, companies, think tanks, NGOs, and the public in Germany, France, Colombia, the UK, and the US. Allison holds an MS in philosophy and public policy (summa cum laude) from the London School of Economics, where she developed a moral framework for artificial general intelligence that relies on natural language processing.