How to track and manage TensorFlow 2.0 and Keras model experiments with MLflow
ML development brings many new complexities beyond the traditional software development lifecycle. Unlike in traditional software development, ML developers want to try multiple algorithms, tools, and parameters to get the best results, and they need to track this information to reproduce their work. In addition, developers need to use many distinct systems to productionize models. To address these problems, many companies are building custom “ML platforms” that automate this lifecycle, but even these platforms are limited to a few supported algorithms and to each company’s internal infrastructure.
Juntai Zheng introduces MLflow, an open source project from Databricks that aims to provide an open ML platform where organizations can use any ML library and development tool of their choice to reliably build and share ML applications. MLflow introduces simple abstractions to package reproducible projects, track results, and encapsulate models that can be used with many existing tools, accelerating the ML lifecycle for organizations of any size.
With a short demo and code example, you’ll see a complete ML model lifecycle for TensorFlow 2.0 and Keras.
Prerequisite knowledge
- A basic understanding of Python and machine learning
What you'll learn
- See a demonstration of the complete ML model lifecycle with TensorFlow 2.0 and Keras
- Understand MLflow concepts and abstractions for models, experiments, and projects
- Use the tracking APIs during model training
- Use the MLflow UI to visually compare experimental runs with different tuning parameters and evaluate metrics
- Integrate MLflow with TensorBoard
Juntai Zheng
Databricks
Juntai Zheng is a software engineer at Databricks and a member of the team developing MLflow. He has actively contributed to MLflow since its inception, including TensorFlow support for MLflow projects, and he developed MLflow's support for TensorFlow 2.0. Juntai holds a bachelor's degree in computer science from UC Berkeley.
Comments
Ideally I would like to find out more about low-latency deployments and setups — ones that autoscale and have fast response times for generating forecasts.
Also, how this can be integrated into an AutoML setup alongside other frameworks, including custom ones.