In the new era of artificial intelligence, every organization must examine how to extract intelligence from its data using deep learning. While much of the focus has been on training neural networks that are smarter and more accurate, deploying neural network models in the data center to deliver responsive experiences to end users is just as important for many applications. Sanford Russell explores how NVIDIA GPUs are deployed today to accelerate deep learning inference workloads in the data center.
This session is sponsored by NVIDIA.
Sanford Russell is in charge of NVIDIA’s autonomous driving ecosystem in North America, where he leads the development of self-driving vehicles with NVIDIA partners, transportation startups, and research institutions. Previously, Sanford served as general manager of NVIDIA’s CUDA-accelerated software platform. Before joining NVIDIA 17 years ago, he worked at Silicon Graphics. Sanford holds a degree in marketing from the University of Massachusetts Dartmouth.
©2016, O'Reilly Media, Inc.