Russ Salakhutdinov discusses some of the key challenges to making machines more intelligent, focusing on the Gated-Attention (GA) Reader model, which integrates a multi-hop architecture with a novel attention mechanism, along with extensions that make use of external linguistic knowledge. Along the way, Russ explains why memory is a crucial aspect of an intelligent agent's ability to plan and reason in partially observable environments, and he demonstrates a deep reinforcement learning agent that can learn to store arbitrary information about the environment over long time lags.
Ruslan Salakhutdinov is an associate professor in the Machine Learning Department at Carnegie Mellon University. Previously, he was an assistant professor in the Departments of Statistics and Computer Science at the University of Toronto and spent two years as a postdoc at the Massachusetts Institute of Technology's Artificial Intelligence Lab. Ruslan's primary interests are deep learning, machine learning, and large-scale optimization. He is an action editor of the Journal of Machine Learning Research and has served on the senior program committees of several learning conferences, including NIPS and ICML. He is an Alfred P. Sloan Research Fellow, a Microsoft Research Faculty Fellow, a Canada Research Chair in statistical machine learning, a senior fellow of the Canadian Institute for Advanced Research, and a recipient of the Early Researcher Award, a Google Faculty Award, and the NVIDIA Pioneers of AI Award. Ruslan holds a PhD in computer science from the University of Toronto.
©2017, O'Reilly Media, Inc.