Deep learning methods have achieved impressive results across a range of passive perception domains, from computer vision to speech recognition and natural language processing. However, these successes have been limited to problems that satisfy two critical properties: first, they rely on the availability of large amounts of labeled data, and second, they rely on the assumption that test points are independent and identically distributed samples rather than time steps in a sequential decision process.
Sergey Levine shares techniques in reinforcement learning for tackling the sequential decision-making problems that arise across a range of real-world deployments of artificial intelligence systems, with a particular focus on examples in robotics. Sergey then explains how emerging techniques in meta-learning make it possible for deep learning systems to learn even from small amounts of data by drawing on past experience of learning other tasks.
Sergey Levine is a professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley. His research focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms, and includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more. Applications of this work include autonomous robots and vehicles, as well as computer vision and graphics. Sergey's work has been featured in many popular press outlets, including the New York Times, the BBC, MIT Technology Review, and Bloomberg Business. Sergey holds a BS, MS, and PhD in computer science, all from Stanford University.