Modern deep learning: Tools and techniques
Who is this presentation for?
- DL developers, researchers, and group managers
- ML engineers
Success with deep learning (DL) requires understanding more than just TensorFlow or Keras. When organizations first begin to deploy deep learning, they’re often faced with a similar set of challenges: their DL developers might understand how to train a single model in principle, but they may not be able to make DL work in practice.
You may run into common questions on topics such as: running DL jobs on a GPU cluster and sharing that cluster among a team of researchers; tuning the hyperparameters of your models; distributed training; storing training and validation metrics; making DL training reproducible; deploying models to production; and improving the inference performance of your DL models, particularly for resource-constrained environments like mobile and edge deployments. These questions often require extensive research. The software tools in these domains are typically highly technical, poorly documented, and hard to interoperate with one another—and the landscape changes quickly. For most organizations, considerable effort is required to integrate a collection of narrow technical tools into a comprehensive DL environment.
Neil Conway and Yoav Zimmerman provide you with an overview of these challenges, summarize relevant research and state-of-the-art algorithms where appropriate, and discuss popular software tools. You’ll have the opportunity to work through several hands-on examples of how to use the software tools to solve practical DL challenges.
Success in DL is about more than just training a single model with TensorFlow. You'll leave knowing the common pitfalls organizations face when adopting DL, along with best practices and software tools for dealing with them.
Prerequisites
- A basic knowledge of deep learning
Materials or downloads needed in advance
- A laptop (you'll be provided access to a compute environment on AWS)
What you'll learn
- Learn about state-of-the-art algorithms for hyperparameter tuning, popular tools for distributed training, and the challenges of DL deployment with TensorFlow Serving
- Understand why GPU scheduling is hard and how to deploy DL jobs on Kubernetes
- Gain key practices for making DL workloads reproducible, along with concepts and tools for optimizing DL models for resource-constrained environments
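To give a flavor of the first topic above, hyperparameter tuning can be sketched in a few lines as random search over a search space. This is an illustrative, minimal sketch only: the `objective` function below is a synthetic stand-in for real model training (which would train a model and return a validation metric), and all names are hypothetical rather than part of any tool covered in the tutorial.

```python
import random

def objective(lr, batch_size):
    # Synthetic stand-in for "train a model, return validation loss".
    # It is minimized near lr=1e-3 and batch_size=64.
    return abs(lr - 1e-3) + 0.001 * abs(batch_size - 64)

def random_search(n_trials, seed=0):
    """Randomly sample hyperparameter configurations; keep the best."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        lr = 10 ** rng.uniform(-5, -1)          # sample lr log-uniformly
        batch_size = rng.choice([16, 32, 64, 128])
        loss = objective(lr, batch_size)
        if best is None or loss < best[0]:
            best = (loss, {"lr": lr, "batch_size": batch_size})
    return best

best_loss, best_config = random_search(n_trials=50)
print(best_config)
```

State-of-the-art tuning algorithms improve on this baseline mainly by allocating more compute to promising configurations (e.g., early-stopping-based methods), but the structure above—define a search space, evaluate trials, keep the best—is the common core.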
Neil Conway is the cofounder and CTO of Determined AI, a startup building software to make deep learning developers dramatically more productive. Previously, Neil was a technical lead at Mesosphere and earned a PhD in computer science from the University of California, Berkeley. Neil has also been a leader and major contributor to several notable open source projects, including Apache Mesos and PostgreSQL.
Yoav Zimmerman is a software engineer at Determined AI, where he works closely with leading organizations to help them apply deep learning successfully using Determined AI’s cutting-edge software. Previously, Yoav worked on knowledge representation at Google. He holds a BSc from UCLA.