Scalable AI and reinforcement learning with Ray
Who is this presentation for?
- Machine learning researchers, practitioners, and data scientists
The demands of modern AI applications continue to grow exponentially, while improvements in hardware (especially memory capacity) are slowing down, leaving no choice but to scale these applications out. Such applications include distributed training, hyperparameter search, and reinforcement learning (RL), and they have already shown remarkable results: disease diagnosis algorithms outperforming medical experts, voice assistants indistinguishable from humans, and AlphaGo beating the world Go champion. However, they also pose a new set of requirements whose combination challenges existing distributed execution frameworks: computation with millisecond latency at high throughput, adaptive construction of arbitrary task graphs, and execution of heterogeneous kernels over diverse sets of resources.
Kristian Hartikainen, Edward Oakes, and Peter Schafhalter lead a deep dive into Ray, a new distributed execution framework for AI applications developed by machine learning and systems researchers at UC Berkeley’s RISELab. You’ll walk through Ray’s API and system architecture and explore application examples, including several state-of-the-art distributed training, hyperparameter search, and RL algorithms.
Prerequisite knowledge
- Familiarity with Python, basic machine learning concepts, and reinforcement learning
Materials or downloads needed in advance
- A laptop
What you'll learn
- Learn the basics of Ray and how to develop simple ML and RL applications on top of Ray and its libraries, RLlib and Tune
UC Berkeley Electrical Engineering & Computer Sciences
Edward Oakes is a second-year PhD student at UC Berkeley and a contributor to the Ray project. Previously, he worked on isolation mechanisms for serverless computing and infrastructure for microservice deployments.
UC Berkeley RISELab
Peter Schafhalter is a first-year PhD student at UC Berkeley’s RISELab. His focus is AI systems: writing software that makes AI run quickly, securely, and explainably, while remaining resilient to failures. He’s building an operating system for self-driving cars based on Ray.
University of Oxford
Kristian Hartikainen is a visiting scholar in the Robotics and AI Lab (RAIL) at UC Berkeley, working with Sergey Levine and Tuomas Haarnoja, and will begin his PhD studies at the University of Oxford with Simon Whiteson in fall 2019. His research focuses on model-free deep reinforcement learning algorithms for robotic control. He also works on Ray RLlib, a scalable reinforcement learning library, and Ray Tune, a distributed framework for hyperparameter search and model training. Kristian is the author and maintainer of Softlearning, the official soft actor-critic project. Previously, he spent several years as a software engineer working on statistical analysis and machine learning products at Statwing and Qualtrics.