Presented By O’Reilly and Intel AI
Put AI to Work
April 29-30, 2018: Training
April 30-May 2, 2018: Tutorials & Conference
New York, NY

Using Cognitive Toolkit (CNTK) and TensorFlow with Kubernetes clusters

Danielle Dean (iRobot), Wee Hyong Tok (Microsoft)
11:55am–12:35pm Tuesday, May 1, 2018
Implementing AI
Location: Sutton North/Center
Average rating: 5.00 (1 rating)

Who is this presentation for?

  • Data scientists, developers, and DevOps engineers

Prerequisite knowledge

  • A basic understanding of containers and deep learning

What you'll learn

  • How to use Kubernetes clusters for deep learning training and serving
  • How to autoscale a Kubernetes cluster based on training and serving requirements
  • How to build an AI application backed by a Kubernetes cluster

Description

Deep learning has fueled many practical applications and experiences and has played a central role in recent breakthroughs: speech recognition that has reached human parity in conversational word recognition, neural networks that are accelerating the creation of highly precise land cover datasets, and models that predict vision impairment, regression rates, and eye diseases, among others.

The Microsoft Cognitive Toolkit (CNTK) is the behind-the-scenes magic that makes it possible to train deep neural networks for a very diverse set of needs, such as the scenarios above, and it lets anyone develop and train deep learning models at massive scale. Successful deep learning projects also require a critical infrastructure ingredient: an environment that enables teams to experiment rapidly and scale elastically with their training requirements.
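
To make that concrete, here is a minimal sketch of what training a small feed-forward network with the CNTK Python API can look like. The network shape, synthetic data, and hyperparameters are illustrative assumptions rather than material from the session; a real workload would read data from a reader and typically wrap the learner in a distributed learner for multi-GPU training.

    import numpy as np
    import cntk as C

    # Illustrative sizes for a toy two-class problem.
    input_dim, num_classes = 784, 2
    features = C.input_variable(input_dim)
    label = C.input_variable(num_classes)

    # Small feed-forward network built with the CNTK layers library.
    model = C.layers.Sequential([
        C.layers.Dense(128, activation=C.relu),
        C.layers.Dense(num_classes)
    ])(features)

    loss = C.cross_entropy_with_softmax(model, label)
    metric = C.classification_error(model, label)

    # Plain SGD learner; multi-GPU jobs would wrap this in a distributed learner.
    lr = C.learning_rate_schedule(0.1, C.UnitType.minibatch)
    trainer = C.Trainer(model, (loss, metric), [C.sgd(model.parameters, lr)])

    # Train on random data purely to show the minibatch loop.
    for _ in range(100):
        x = np.random.rand(64, input_dim).astype(np.float32)
        y = np.eye(num_classes, dtype=np.float32)[np.random.randint(num_classes, size=64)]
        trainer.train_minibatch({features: x, label: y})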

Meanwhile, container technologies have been maturing, and more enterprises are using containers in their IT environments. Containers enable organizations to simplify the development and deployment of applications across environments such as on-premises, public cloud, and hybrid cloud. Several container orchestration and management technologies are now available, including Docker Swarm, Kubernetes, and Mesosphere Marathon.

Kubernetes is an open source technology that makes it easier to automate the deployment, scaling, and management of containerized applications. GPU support in Kubernetes lets clusters run frequent experiments, serve deep learning models with high performance, autoscale as demand changes, and much more. Join Wee Hyong Tok and Danielle Dean as they walk you through using Kubernetes clusters for deep learning.
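
As a rough illustration of how such a cluster can be driven programmatically, the sketch below uses the official Kubernetes Python client to submit a training job that requests one GPU. The image name, job name, and training command are hypothetical placeholders; the session covers the broader workflow rather than this exact snippet.

    from kubernetes import client, config

    # Hypothetical container image that packages the training script.
    IMAGE = "myregistry.example.com/cntk-train:latest"

    # Load credentials from the local kubeconfig (use load_incluster_config() inside a pod).
    config.load_kube_config()

    container = client.V1Container(
        name="cntk-training",
        image=IMAGE,
        command=["python", "train.py"],
        # Request a GPU so the scheduler places the pod on a GPU-enabled node.
        resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
    )

    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name="cntk-training-job"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(containers=[container], restart_policy="Never")
            ),
            backoff_limit=2,
        ),
    )

    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)

Serving and autoscaling follow the same pattern: a Deployment for the model server plus a HorizontalPodAutoscaler, created either from manifests or through the same client.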

Danielle Dean

iRobot

Danielle Dean is the technical director of machine learning at iRobot. Previously, she was a principal data science lead at Microsoft. She holds a PhD in quantitative psychology from the University of North Carolina at Chapel Hill.

Wee Hyong Tok

Microsoft

Wee Hyong Tok is a principal data science manager with the AI CTO Office at Microsoft, where he leads the engineering and data science team for the AI for Earth program. Wee Hyong has worn many hats in his career, including developer, program and product manager, data scientist, researcher, and strategist. His track record of leading successful engineering and data science teams has given him unique superpowers as a trusted AI advisor to customers. Wee Hyong has coauthored several books on artificial intelligence, including Predictive Analytics Using Azure Machine Learning and Doing Data Science with SQL Server. He holds a PhD in computer science from the National University of Singapore.