ML ops: Applying DevOps practices to machine learning workloads
Who is this presentation for?
- Data scientists, DevOps practitioners, and automation engineers
Gartner’s 2017 report “Predicts 2017: Analytics Strategy and Technology” estimates that by 2020, 40% of all data science tasks will be automated. While the movement toward bringing the full potential of AI to market faster is promising, there’s a balance between automation and quality that needs to be maintained. Applying DevOps practices to machine learning workloads not only brings models to market faster but also maintains the quality and integrity of those models. Sireesha Muppala, Shelbee Eigenbrode, and Randall DeFauw explore how to apply DevOps practices to machine learning workloads and demonstrate a CI/CD pipeline using a managed machine learning service and cloud-based development services.
While the concept of applying DevOps practices to machine learning workloads is not necessarily novel, it’s still very immature in practice. Business stakeholders frequently look for guidance on how to reduce time to market and ensure models make it out of the lab. It’s estimated that a large share of the models created never get deployed to production. To realize the full potential of AI, it’s critical to ensure the models that get developed have a full path to production. That full path must include quality gates and varying levels of model evaluation, using practices that are unique to machine learning workloads.
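One of the quality gates described above can be sketched as a simple check that a candidate model beats the current production baseline before it’s promoted. This is an illustrative sketch only; the function name, metric, and threshold are hypothetical and not tied to any specific service.

```python
# Hypothetical quality gate: promote a candidate model only if it meets or
# beats the production baseline on a held-out evaluation set.
def passes_quality_gate(candidate_accuracy, baseline_accuracy, min_improvement=0.0):
    """Return True when the candidate model may be promoted to production."""
    return candidate_accuracy >= baseline_accuracy + min_improvement

# A candidate scoring 0.91 against a 0.88 baseline passes the gate;
# one scoring 0.85 does not.
print(passes_quality_gate(0.91, 0.88))  # True
print(passes_quality_gate(0.85, 0.88))  # False
```

In a real pipeline this check would run automatically after training, and a failing gate would stop the release rather than rely on a manual review.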
Sireesha, Shelbee, and Randall dive into the following areas to ensure you take away key practices that you can apply to your existing workloads: DevOps 101, a base foundation of DevOps practices, including what DevOps is and what it isn’t; ML ops, the practices behind MLOps and step-by-step guidance on applying DevOps practices to machine learning workloads, leading to a continuous integration/continuous delivery (CI/CD) pipeline targeted at machine learning; and a demonstrable CI/CD pipeline built with AWS services. Although the demonstration uses AWS services, the practices are largely technology-agnostic, so you can apply them regardless of technology or platform.
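The platform-agnostic shape of such a pipeline can be sketched as an ordered list of stages that stops at the first failure. The stage names and stub functions below are purely illustrative assumptions, not the speakers’ actual implementation or any AWS API.

```python
# Hypothetical minimal ML CI/CD pipeline: run stages in order and stop
# at the first failing (falsy) stage.
def run_pipeline(stages):
    """Run (name, stage) pairs in order; report the first failure or success."""
    for name, stage in stages:
        if not stage():
            return f"failed at: {name}"
    return "deployed"

# Illustrative stages; in practice each lambda would call real tooling,
# e.g. a test runner, a training job, a metrics check, a deployment step.
stages = [
    ("unit-tests", lambda: True),   # validate training and inference code
    ("train", lambda: True),        # launch a training job
    ("evaluate", lambda: True),     # apply model quality gates
    ("deploy", lambda: True),       # update the serving endpoint
]
print(run_pipeline(stages))  # deployed
```

The point of the sketch is the ordering and the early exit: a model only reaches the deploy stage after every earlier gate has passed, which is what distinguishes an ML CI/CD pipeline from ad hoc promotion out of a notebook.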
Prerequisite knowledge
- General knowledge of how a model is created (useful but not required)
What you'll learn
- Understand why ML ops is important and the steps to take to begin creating a CI/CD pipeline, regardless of the underlying platform and technologies
Amazon Web Services
Sireesha Muppala is a solutions architect at Amazon Web Services. Her area of depth is machine learning and artificial intelligence, and she provides guidance to AWS customers on their ML and AI workloads. She led the University of Colorado team to win and successfully complete a two-year research grant from the Air Force Research Lab on “Autonomous Job Scheduling in Unmanned Aerial Vehicles.” She’s an experienced public speaker and has presented research papers at international conferences, such as “CoSAC: Coordinated Session-Based Admission Control for Multi-Tier Internet Applications” at the IEEE International Conference on Computer Communications and Networks (ICCCN) and “Regression Based Multi-Tier Resource Provisioning for Session Slowdown Guarantees” at the IEEE International Performance, Computing and Communications Conference (IPCCC). She’s published technical articles, such as “Coordinated Session-Based Admission Control with Statistical Learning for Multi-Tier Internet Applications” in the Journal of Network and Computer Applications (JNCA), “Regression-Based Resource Provisioning for Session Slowdown Guarantee in Multi-Tier Internet Servers,” and “Multi-Tier Service Differentiation: Coordinated Resource Provisioning and Admission Control” in the Journal of Parallel and Distributed Computing (JPDC). Sireesha earned her PhD and completed postdoctoral work at the University of Colorado, Colorado Springs, while working full time. Her PhD thesis is “Multi-Tier Internet Service Management Using Statistical Learning Techniques.”
Amazon Web Services
Shelbee Eigenbrode is a solutions architect at Amazon Web Services (AWS). Her current areas of depth include DevOps combined with machine learning and artificial intelligence. She’s been in technology for 22 years, spanning multiple roles and technologies, including 20+ years at IBM. She’s a published author, blogger, and vlogger evangelizing DevOps practices, with a passion for driving rapid innovation and optimization at scale. In 2016, she won the DevOps Dozen blog of the year award for writing on what DevOps is not. With over 26 patents granted across various technology domains, her passion for continuous innovation combined with a love of all things data recently turned her focus to data science. Combining her backgrounds in data, DevOps, and machine learning, her passion is helping customers embrace data science and ensure all data models have a path to use. She also aims to put ML in the hands of developers and customers who are not classically trained data scientists.
Amazon Web Services
Randy DeFauw is a solutions architect at AWS, with over 20 years of experience in enterprise software architecture. He worked heavily in DevOps in the past and now focuses on analytics and machine learning.