It’s no secret that machine learning workflows are awkward to deploy, hard to maintain, and a frequent source of friction with engineering and IT teams. Too often, work done by data scientists and machine learning researchers is wasted because it never escapes their laptops or cannot be scaled to larger datasets.
Kubernetes—the container orchestration engine used by all of the top technology companies, including Google, Amazon, and Microsoft—was built from the ground up to run and manage highly distributed workloads on huge clusters. Thus, it provides a solid foundation for model development.
Daniel Whitenack demonstrates how to easily deploy and scale AI/ML workflows on any infrastructure using Kubernetes. You’ll learn how to containerize and deploy model training and inference on Kubernetes using popular open source tools like Pachyderm and Kubeflow and discover how to ingress and egress data, version models, utilize GPUs, and track and evaluate models.
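To make the "containerize and deploy training on Kubernetes" idea concrete, a training step is typically packaged as a container image and submitted as a Kubernetes Job. The sketch below is a minimal, hypothetical manifest (the image name and command are placeholders, not from the talk); the `nvidia.com/gpu` resource limit shows how a GPU is requested when the cluster's device plugin is installed.

```yaml
# Hypothetical minimal Job that runs one containerized training step.
# The image and command are illustrative placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: model-training
spec:
  template:
    spec:
      containers:
        - name: train
          image: registry.example.com/ml/train:latest   # placeholder image
          command: ["python", "train.py"]               # placeholder entrypoint
          resources:
            limits:
              nvidia.com/gpu: 1   # request one GPU via the device plugin
      restartPolicy: Never
```

Applied with `kubectl apply -f job.yaml`, Kubernetes schedules the pod onto a suitable node, runs the training container to completion, and retains the pod for log inspection; tools like Pachyderm and Kubeflow build higher-level pipeline abstractions on top of primitives like this.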
Daniel Whitenack is a PhD-trained data scientist and engineer at Pachyderm. His industry experience includes developing data science applications, such as predictive models, dashboards, recommendation engines, and more, for large and small companies. Daniel has spoken at conferences around the world, including Applied ML Days, Spark Summit, PyCon, ODSC, and GopherCon. He maintains the Go kernel for Jupyter and is actively helping to organize contributions to various open source data science projects.
©2018, O'Reilly Media, Inc.