The Jupyter Notebook has become the de facto platform for data scientists and AI engineers building interactive applications and developing AI/ML models. In this setting, it's common to decompose the phases of development into multiple notebooks to simplify the development and management of the model lifecycle.
Luciano Resende details how to schedule the notebooks that correspond to different phases of the model lifecycle together as notebook-based AI pipelines, and walks you through scenarios that demonstrate how to reuse notebooks via parameterization.
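The notebook parameterization mentioned above is commonly done with a tool such as Papermill, which injects a cell of parameter overrides into a notebook before executing it. The sketch below illustrates only the injection step, operating on the plain `.ipynb` JSON structure with the standard library; the `"parameters"` cell tag follows the Papermill convention, while the helper name `inject_parameters` and the tiny example notebook are made up for illustration:

```python
import json


def inject_parameters(nb, params):
    """Return a copy of notebook dict `nb` with a new code cell,
    containing `params` as assignments, inserted right after the
    cell tagged "parameters" (the Papermill convention)."""
    nb = json.loads(json.dumps(nb))  # deep copy via JSON round-trip
    source = "\n".join(f"{k} = {v!r}" for k, v in params.items())
    injected = {
        "cell_type": "code",
        "metadata": {"tags": ["injected-parameters"]},
        "execution_count": None,
        "outputs": [],
        "source": source,
    }
    for i, cell in enumerate(nb["cells"]):
        if "parameters" in cell.get("metadata", {}).get("tags", []):
            nb["cells"].insert(i + 1, injected)
            break
    else:
        # No tagged cell found: prepend the overrides instead.
        nb["cells"].insert(0, injected)
    return nb


# Minimal notebook with a default-parameters cell (illustrative only).
notebook = {
    "cells": [
        {"cell_type": "code",
         "metadata": {"tags": ["parameters"]},
         "source": "epochs = 1\nlr = 0.01"},
        {"cell_type": "code", "metadata": {},
         "source": "train(epochs, lr)"},
    ],
    "metadata": {}, "nbformat": 4, "nbformat_minor": 5,
}

result = inject_parameters(notebook, {"epochs": 20, "lr": 0.001})
print(result["cells"][1]["source"])
```

Because the overrides land in a cell that runs after the defaults, the same notebook can be reused across pipeline runs with different hyperparameters, which is the essence of the reuse-via-parameterization approach.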
Luciano Resende is a senior technical staff member (STSM) and open source data science and AI platform architect at IBM CODAIT (formerly the Spark Technology Center). He's a member of the Apache Software Foundation (ASF), where he's been contributing to open source for over 10 years. He contributes to several big data Apache projects in the Spark ecosystem, as well as to Jupyter ecosystem projects, building a scalable, secure, and flexible enterprise data science platform.
©2019, O'Reilly Media, Inc.