Mar 15–18, 2020

Schedule: ML in Production sessions

11:00am–11:40am Tuesday, March 17, 2020
Location: LL21 E
Holden Karau (Independent)
Trevor Grant and Holden Karau make sure you can get and keep your models in production with Kubeflow.
11:50am–12:30pm Tuesday, March 17, 2020
Location: LL21 E
Shubhankar Jain (SurveyMonkey), Aliaksandr Padvitselski (SurveyMonkey), Manohar Angani (SurveyMonkey)
Every organization leverages ML to increase value to customers and understand their business. You may have created models, but now you need to scale. Shubhankar Jain, Aliaksandr Padvitselski, and Manohar Angani use a case study to teach you how to pinpoint inefficiencies in your ML data flow, how SurveyMonkey tackled them, and how to make your data more usable to accelerate ML model development.
1:45pm–2:25pm Tuesday, March 17, 2020
Location: LL21 E
Kelley Rivoire (Stripe)
Tools for training and optimizing models have become more prevalent and easier to use; however, these are insufficient for deploying ML in critical production applications. Kelley Rivoire dissects how Stripe approached challenges in developing reliable, accurate, and performant ML applications that affect hundreds of thousands of businesses.
2:35pm–3:15pm Tuesday, March 17, 2020
Location: LL21 E
Secondary topics: Cloud Platforms and SaaS
Rustem Feyzkhanov (Instrumental)
Machine learning (ML) and deep learning (DL) are becoming more and more essential for businesses in internal and external use; one of the main issues with deployment is finding the right way to train and operationalize the model. Rustem Feyzkhanov digs into how to use AWS infrastructure for a serverless approach to deep learning, providing a cheap, simple, scalable, and reliable architecture.
4:15pm–4:55pm Tuesday, March 17, 2020
Location: LL21 E
David Talby (Pacific AI)
The industry has about 40 years of experience forming best practices and tools for storing, versioning, collaborating, securing, testing, and building software source code—but only about 4 years doing so for AI models. David Talby catches you up on current best practices and freely available tools so your team can go beyond experimentation to successfully deploy models.
11:00am–11:40am Wednesday, March 18, 2020
Location: LL21 E
Moty Fania (Intel)
Moty Fania shares key insights from implementing and sustaining hundreds of ML models in production, including continuous delivery of ML models and systematic measures to minimize the cost and effort required to sustain them in production. You'll learn from examples from different business domains and deployment scenarios (on-premises and in the cloud) covering the architecture and related AI platforms.
11:50am–12:30pm Wednesday, March 18, 2020
Location: LL21 E
Alice Zheng (Amazon)
Alice Zheng shares four lessons from building and operating large-scale, production-grade machine learning systems at Amazon, useful for practitioners and would-be practitioners in the field.
1:45pm–2:25pm Wednesday, March 18, 2020
Location: LL21 E
Ananth Kalyan Chakravarthy Gundabattula (Commonwealth Bank of Australia)
Feature engineering can make or break a machine learning model. The featuretools package and its associated algorithm accelerate the way features are built. Ananth Kalyan Chakravarthy Gundabattula explains a Dask- and Prefect-based framework that addresses the challenges and opportunities of this approach in terms of lineage, risk, ethics, and automated data pipelines for the enterprise.
2:35pm–3:15pm Wednesday, March 18, 2020
Location: LL21 E
Jay Budzik (Zest AI)
More companies are adopting machine learning (ML) to run key business functions. The best-performing models combine diverse model types into stacked ensembles, but explaining these hybrid models has been impossible—until now. Jay Budzik details a new technique, generalized integrated gradients (GIG), to explain complex ensembled ML models that are safe to use in high-stakes applications.

Contact us

confreg@oreilly.com

For conference registration information and customer service

partners@oreilly.com

For more information on community discounts and trade opportunities with O’Reilly conferences

Become a sponsor

For information on exhibiting or sponsoring a conference

pr@oreilly.com

For media/analyst press inquiries