The ubiquity of sequential decision problems throughout computer science makes deep reinforcement learning one of the most exciting developments in modern AI. However, realizing the potential of such a general framework in real applications has proven far more challenging.
Drawing on his work building and deploying an RL-based relational query optimizer, a core component of almost every database system, Sanjay Krishnan highlights some of the underappreciated challenges to implementing deep reinforcement learning. Today's RL algorithms do not fully exploit the structure of software simulators: they collect data episodically rather than strategically rewinding, fast-forwarding, and skipping through the simulation. Further, they are very sensitive to policy parametrization, especially where the policy structure is hierarchical or discontinuous. RL algorithms also struggle in "overactuated" problems, where the action space has significant redundancy. For each of these three challenges, Sanjay shares experimental results illustrating the phenomenon in practice, along with algorithmic solutions and an overview of how the same phenomenon appears in other RL domains, such as robotics.
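To make the first challenge concrete, the sketch below contrasts episodic data collection with checkpoint-based exploration in a hypothetical toy simulator. The `ToySimulator`, its reward function, and the `branched_rollouts` helper are all invented for illustration; they are not the talk's actual query-optimizer environment or algorithm. The point is only that a software simulator can snapshot and restore state cheaply, so an agent can "rewind" and try several actions from the same state instead of paying for a full fresh episode per trial.

```python
import copy

class ToySimulator:
    """A hypothetical resettable simulator (illustrative only).

    State is a simple counter; actions add to it. What matters is the
    checkpoint/restore API, which purely episodic RL loops ignore.
    """
    def __init__(self):
        self.state = 0

    def step(self, action):
        self.state += action
        reward = -abs(self.state - 10)  # reward peaks when state hits 10
        return self.state, reward

    def checkpoint(self):
        # Software simulators can snapshot their state cheaply...
        return copy.deepcopy(self.state)

    def restore(self, snap):
        # ...and rewind to it, enabling branched exploration.
        self.state = copy.deepcopy(snap)

def branched_rollouts(sim, actions, horizon):
    """Evaluate every candidate action from one checkpoint, rewinding
    between trials, instead of re-running a fresh episode each time."""
    snap = sim.checkpoint()
    returns = {}
    for a in actions:
        sim.restore(snap)  # rewind: same start state for every trial
        total = 0
        for _ in range(horizon):
            _, r = sim.step(a)
            total += r
        returns[a] = total
    sim.restore(snap)  # leave the simulator where we found it
    return returns
```

For example, `branched_rollouts(ToySimulator(), [1, 2, 5], horizon=3)` scores all three actions from the same initial state with a single checkpoint; an episodic loop would instead reconstruct that state from scratch three times.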
Sanjay Krishnan is an assistant professor of computer science at the University of Chicago. His research focuses on applications of machine learning and control theory to problems in computer and cyber-physical systems. His work has received a number of awards, including the 2016 SIGMOD Best Demonstration Award, the 2015 IEEE GHTC Best Paper Award, and the Sage Scholar Award. Sanjay holds a PhD and a master's degree in computer science from UC Berkeley.
©2019, O'Reilly Media, Inc.