9:00am–5:00pm Monday, April 15 & Tuesday, April 16
PyTorch is a machine learning library for Python that allows users to build deep neural networks with great flexibility. Its easy-to-use API and seamless use of GPUs make it a sought-after tool for deep learning. Ana Hocevar introduces the PyTorch workflow and demonstrates how to use it to build deep learning models using real-world datasets.
9:00am–5:00pm Monday, April 15 & Tuesday, April 16
SOLD OUT
The TensorFlow library represents computations as graphs and automatically parallelizes them across available resources, an architecture well suited to implementing neural networks. Dylan Bargteil walks you through TensorFlow's capabilities in Python, teaching you how to build machine learning algorithms piece by piece and how to use TensorFlow's Keras API in several hands-on applications.
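To make the computational-graph idea concrete, here is a deliberately tiny sketch in plain Python, not TensorFlow's actual API: each node records its inputs and a local gradient rule, so the graph can be evaluated forward and then differentiated backward in reverse topological order.

```python
# Toy computational graph with reverse-mode autodiff (illustration only;
# TensorFlow builds and optimizes such graphs for you).

class Node:
    def __init__(self, value, parents=(), grad_fns=()):
        self.value = value        # result of the forward pass
        self.parents = parents    # upstream nodes in the graph
        self.grad_fns = grad_fns  # local gradient w.r.t. each parent
        self.grad = 0.0

def add(a, b):
    return Node(a.value + b.value, (a, b), (lambda g: g, lambda g: g))

def mul(a, b):
    return Node(a.value * b.value, (a, b),
                (lambda g: g * b.value, lambda g: g * a.value))

def backward(out):
    # Topologically order the graph, then push gradients output-to-input.
    order, seen = [], set()
    def visit(n):
        if id(n) not in seen:
            seen.add(id(n))
            for p in n.parents:
                visit(p)
            order.append(n)
    visit(out)
    out.grad = 1.0
    for node in reversed(order):
        for parent, fn in zip(node.parents, node.grad_fns):
            parent.grad += fn(node.grad)

x = Node(3.0)
y = Node(4.0)
z = add(mul(x, y), y)   # z = x*y + y
backward(z)
print(z.value, x.grad, y.grad)  # 16.0, dz/dx = y = 4.0, dz/dy = x + 1 = 4.0
```

Because the graph is an explicit data structure, independent subgraphs can be scheduled on different devices, which is what makes automatic parallelization possible.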
9:00am–5:00pm Monday, April 15 & Tuesday, April 16
Francesca Lazzeri, Wee Hyong Tok, and Krishna Anumalasetty walk you through the core steps for using Azure Machine Learning services to train your machine learning models both locally and on remote compute resources.
9:00am–5:00pm Monday, April 15 & Tuesday, April 16
SOLD OUT
Delip Rao and Brian McMahan explore natural language processing with deep learning, walking you through neural network architectures and NLP tasks and teaching you how to apply those architectures to those tasks.
9:00am–12:30pm Tuesday, April 16, 2019
Mo Patel leads a deep dive into all aspects of the PyTorch lifecycle via hands-on examples such as image classification, text classification, and linear modeling. Along the way, immersive labs cover other aspects of machine learning, such as transfer learning, data modeling, and deployment to production.
9:00am–12:30pm Tuesday, April 16, 2019
Gunnar Carlsson explains how to use topological data analysis to describe the functioning and learning of a neural network in a compact and understandable way. The approach yields material improvements in training time and accuracy, and it enables neural network architectures to be customized to the data type at hand, further boosting performance and widening the method's applicability across datasets.
9:00am–12:30pm Tuesday, April 16, 2019
Rachel Bellamy, Kush Varshney, Karthikeyan Natesan Ramamurthy, and Michael Hind explain how to use and contribute to AI Fairness 360—a comprehensive Python toolkit that provides metrics to check for unwanted bias in datasets and machine learning models and state-of-the-art algorithms to mitigate such bias.
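One of the standard bias metrics covered by toolkits like AI Fairness 360 is the disparate impact ratio. The sketch below computes it in plain Python to show the underlying idea; it is not the toolkit's API (AIF360 exposes this and many other metrics through its own dataset and metric classes).

```python
# Disparate impact ratio, computed directly from outcome/group lists.
# Illustration of the metric's definition, not the AIF360 interface.

def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    A value near 1.0 suggests parity; the common "80% rule" flags
    ratios below 0.8 as potentially discriminatory.
    """
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate_priv = sum(priv) / len(priv)
    rate_unpriv = sum(unpriv) / len(unpriv)
    return rate_unpriv / rate_priv

# Toy example: hiring outcomes (1 = hired) for two groups.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(disparate_impact(outcomes, groups, privileged="A"))  # 0.25
```

Here group A is hired at a rate of 0.8 and group B at 0.2, so the ratio of 0.25 falls well below the 0.8 threshold, exactly the kind of signal the toolkit's metrics surface before mitigation algorithms are applied.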
1:45pm–5:15pm Tuesday, April 16, 2019
Justina Petraityte offers a hands-on walk-through of developing intelligent AI assistants based entirely on machine learning and using only the open source tools Rasa NLU and Rasa Core. You'll learn the fundamentals of conversational AI and best practices for developing AI assistants that scale and learn from real conversational data.
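The first task such an assistant must solve is intent classification: mapping a user utterance to one of a fixed set of intents. The toy below uses a word-overlap heuristic purely to illustrate the task; Rasa NLU replaces this with trained machine learning pipelines, and the intent names here are made up for the example.

```python
# Deliberately tiny intent classifier (keyword overlap), illustrating
# the task Rasa NLU handles with real ML pipelines.

INTENT_EXAMPLES = {
    "greet":       ["hello there", "hi", "good morning"],
    "book_flight": ["book a flight", "i need a plane ticket"],
    "goodbye":     ["bye", "see you later", "goodbye"],
}

def classify_intent(text):
    """Pick the intent whose examples share the most words with the input."""
    words = set(text.lower().split())
    scores = {
        intent: max(len(words & set(ex.split())) for ex in examples)
        for intent, examples in INTENT_EXAMPLES.items()
    }
    return max(scores, key=scores.get)

print(classify_intent("can you book a flight for me"))  # book_flight
```

Dialogue management (deciding what the assistant should do next, which Rasa Core learns from real conversations) sits on top of this classification step.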
1:45pm–5:15pm Tuesday, April 16, 2019
Ray is a general-purpose framework for programming your cluster. Robert Nishihara, Philipp Moritz, Ion Stoica, and Eric Liang lead a deep dive into Ray, walking you through its API and system architecture and sharing application examples, including several state-of-the-art AI algorithms.
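Ray's core programming pattern is: launch a function asynchronously as a task, get back a future, and fetch results later. The sketch below mimics that pattern on a single machine with the standard library's `concurrent.futures` as a stand-in; Ray itself (via `@ray.remote`, `.remote(...)`, and `ray.get`) schedules the same style of task graph across an entire cluster.

```python
# Single-machine stand-in for Ray's task pattern, using the stdlib.
# Ray equivalent (roughly): futures = [slow_square.remote(i) for i in range(4)]
#                           results = ray.get(futures)

from concurrent.futures import ThreadPoolExecutor

def slow_square(x):
    # Stand-in for an expensive computation you would ship to a worker.
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(slow_square, i) for i in range(4)]
    results = [f.result() for f in futures]
print(results)  # [0, 1, 4, 9]
```

The key idea in both cases is that submission is non-blocking, so many tasks can be in flight at once and results are collected only when needed.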
11:05am–11:45am Wednesday, April 17, 2019
Interested in deep learning models and how to deploy them on Kubernetes at production scale? Not sure if you need to use GPUs or CPUs? Mathew Salvaris and Fidan Boylu Uz help you out by providing a step-by-step guide to creating a pretrained deep learning model, packaging it in a Docker container, and deploying it as a web service on a Kubernetes cluster.
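The last step of that pipeline, exposing a model as a JSON web service, can be sketched with the standard library alone. The `predict` function below is a stub standing in for real model inference; in the talk's setup this handler would live inside the Docker container that Kubernetes runs.

```python
# Minimal JSON scoring endpoint (stdlib only). The "model" is a stub;
# swap predict() for real deep learning inference in practice.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    # Stand-in for model inference: returns the mean of the inputs.
    return {"score": sum(features) / max(len(features), 1)}

class ScoringHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(predict(payload["features"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("", 8080), ScoringHandler).serve_forever()
```

Production frameworks add batching, health checks, and GPU-aware scheduling on top of this shape, but the request/response contract is the same.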
11:05am–11:45am Wednesday, April 17, 2019
Josh Gordon shares the very latest in TensorFlow, focusing on TensorFlow 2.0 and its easy-to-use eager execution. Josh also covers how to use TensorFlow's revised high-level API and details pitfalls and tricks to get better performance on accelerator hardware.
1:00pm–1:40pm Wednesday, April 17, 2019
On most large projects, data scientists design dozens of machine learning models using different combinations of hyperparameters, data configurations, and training settings. Catherine Ordun describes how to build your own model-tracking leaderboard in Keras.
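The core of such a tracker is small: log each run's hyperparameters and metrics, then rank runs. The framework-agnostic sketch below shows that shape; in the talk the logged runs would be Keras models, and the field names here are illustrative.

```python
# Minimal experiment-tracking leaderboard: log runs, rank by metric.

class Leaderboard:
    def __init__(self):
        self.runs = []

    def log(self, name, hyperparams, val_accuracy):
        self.runs.append({"name": name,
                          "hyperparams": hyperparams,
                          "val_accuracy": val_accuracy})

    def top(self, n=3):
        """Best runs first, ranked by validation accuracy."""
        ranked = sorted(self.runs, key=lambda r: r["val_accuracy"],
                        reverse=True)
        return ranked[:n]

board = Leaderboard()
board.log("baseline", {"lr": 0.01, "dropout": 0.0}, 0.81)
board.log("dropout",  {"lr": 0.01, "dropout": 0.5}, 0.86)
board.log("low_lr",   {"lr": 0.001, "dropout": 0.5}, 0.84)
print([r["name"] for r in board.top(2)])  # ['dropout', 'low_lr']
```

In Keras specifically, a custom callback can call `board.log(...)` at the end of each training run so the leaderboard stays current automatically.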
1:00pm–1:40pm Wednesday, April 17, 2019
Neural architecture search (NAS) is a promising research direction that has the potential to replace expert-designed networks with learned, task-specific architectures. Ameet Talwalkar shares work that aims to help ground the empirical results in this field and proposes new NAS baselines.
2:40pm–3:20pm Wednesday, April 17, 2019
Twitter is a large company with many ML use cases. Historically, there have been many ways to productionize ML at Twitter. Yi Zhuang and Nicholas Leonard describe the setup and benefits of a unified ML platform for production and explain how the Twitter Cortex team brings together users of various ML tools.
2:40pm–3:20pm Wednesday, April 17, 2019
At the core of today's problems with image classification and deep learning lies one fundamental truth: most AI systems operate by choosing the path of least resistance, not the path of highest long-term quality. Matt Zeiler discusses Clarifai's approach to closing the loop on AI and the techniques it employs to counter the AI quality regression phenomenon.
4:05pm–4:45pm Wednesday, April 17, 2019
Pradip Bose details a next-generation AI research project focused on creating "self-aware" AI systems that have built-in autonomic detection and mitigation facilities to avoid faulty or undesirable behavior in the field—in particular, cognitive bias and inaccurate decisions that are perceived as being unethical.
4:55pm–5:35pm Wednesday, April 17, 2019
Games are wonderful contained problem spaces, making them great places to explore AI—even if you're not a game developer. Paris Buttfield-Addison, Mars Geldard, and Tim Nugent teach you how to use Unity to train, explore, and manipulate intelligent agents that learn. You'll train a quadruped to walk, then train it to explore, fetch, and manipulate the world.
4:55pm–5:35pm Wednesday, April 17, 2019
The development of AI is creating new opportunities to improve the lives of all people. It's also raising new questions about ways to build fairness, interpretability, and other moral and ethical values into these systems. Using Jupyter and TensorFlow, Andrew Zaldivar shares hands-on examples that highlight current work and recommended practices toward the responsible development of AI.
1:00pm–1:40pm Thursday, April 18, 2019
Building deep learning applications is hard. Building them repeatably is harder. Maintaining high computational performance during a repeatable deep learning development process is borderline impossible. Evan Sparks describes the key pitfalls associated with fast, repeatable model development and details what practitioners can do to avoid them and maintain a supercharged AI development workflow.
1:00pm–1:40pm Thursday, April 18, 2019
Automated machine learning (AutoML) enables both data scientists and domain experts (with limited machine learning training) to be productive and efficient. AutoML is a fundamental shift in how organizations approach machine learning. Francesca Lazzeri and Wee Hyong Tok demonstrate how to use AutoML to automate the selection of machine learning models and the tuning of their hyperparameters.
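The loop at the heart of hyperparameter tuning can be reduced to a random-search sketch: sample candidate settings from a search space, score each, keep the best. Azure's AutoML layers smarter samplers, early termination, and model selection on top of this; the objective function and space below are made-up stand-ins.

```python
# Random search over a hyperparameter space: the simplest AutoML loop.

import random

def objective(params):
    # Stand-in for "train a model, return validation score".
    return 1.0 - (params["lr"] - 0.01) ** 2 - 0.1 * params["depth"] / 10

def random_search(space, trials, seed=0):
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(trials):
        params = {"lr": rng.uniform(*space["lr"]),
                  "depth": rng.choice(space["depth"])}
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

space = {"lr": (0.0001, 0.1), "depth": [2, 4, 8]}
best, score = random_search(space, trials=50)
print(best, score)
```

Each trial is independent, so this loop also parallelizes naturally across remote compute, which is how managed services run many candidate models at once.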
2:40pm–3:20pm Thursday, April 18, 2019
Magnus Hyttsten shares best practices for using TensorFlow effectively in a distributed setting. Magnus covers using TensorFlow's new DistributionStrategies to get easy, high-performance training with Keras models (and custom models) on multi-GPU setups, as well as multinode training on clusters with accelerators.
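Conceptually, a mirrored distribution strategy has each replica compute gradients on its own shard of the batch, averages those gradients (the all-reduce step), and applies the result to the shared weights. The plain-Python sketch below shows that mechanic on a one-parameter model; it is not TensorFlow's API.

```python
# Data-parallel training step, sketched for the 1-D model y = w * x
# with mean squared error. Each "replica" sees one shard of the batch.

def gradient(w, batch):
    # d/dw of mean((w*x - y)^2) over the shard.
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def data_parallel_step(w, batch, num_replicas, lr=0.1):
    shard = len(batch) // num_replicas
    shards = [batch[i * shard:(i + 1) * shard] for i in range(num_replicas)]
    grads = [gradient(w, s) for s in shards]  # one gradient per replica
    avg_grad = sum(grads) / num_replicas      # the all-reduce step
    return w - lr * avg_grad                  # update the shared weight

# Fit y = 2x from a batch split across 2 "replicas".
batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
w = 0.0
for _ in range(100):
    w = data_parallel_step(w, batch, num_replicas=2)
print(round(w, 3))  # converges to 2.0
```

Real strategies do the averaging with efficient cross-device all-reduce (NCCL on GPUs, collective ops across nodes), but the arithmetic being performed is exactly this.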