Accelerating deep learning workloads in the cloud and data centers
No one has time to search for, install, and configure an environment for running deep learning (DL) workloads optimally. Intel has collaborated with major cloud service providers (CSPs) and OEM partners to offer preconfigured, Intel-optimized DL environments, giving data scientists and deep learning practitioners instant access to them. This session will help you quickly set up Intel-optimized AI environments for your workloads, whether in the cloud or in your own data center.
Ravi Panchumarthy walks you through launching preconfigured virtual machines with Intel-optimized deep learning frameworks in the cloud (AWS, Azure, and GCP), running TensorFlow convolutional neural network (CNN) benchmarks, and the technical details of Intel-optimized solutions that accelerate deployment to the data center via OEM partners (Dell, Lenovo, HPE, and Inspur).
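As a sketch of the cloud workflow described above, the commands below launch a GCP instance from a preconfigured Deep Learning VM image and run a ResNet-50 benchmark from the tensorflow/benchmarks repository. The image family, machine type, and benchmark flags shown here are illustrative assumptions; check your CSP's current image catalog and the benchmark repo's README before running.

```shell
# Launch a CPU instance from a preconfigured Deep Learning VM image
# (image family is an assumption; list current families with
#  `gcloud compute images list --project deeplearning-platform-release`)
gcloud compute instances create tf-benchmark-vm \
    --zone=us-central1-a \
    --machine-type=n1-standard-16 \
    --image-family=tf-latest-cpu \
    --image-project=deeplearning-platform-release

# SSH into the instance...
gcloud compute ssh tf-benchmark-vm --zone=us-central1-a

# ...then, on the VM, fetch and run the TensorFlow CNN benchmarks
git clone https://github.com/tensorflow/benchmarks.git
cd benchmarks/scripts/tf_cnn_benchmarks
python tf_cnn_benchmarks.py --model=resnet50 --batch_size=64 \
    --num_batches=100
```

The equivalent flow on AWS or Azure swaps the launch step for their respective Deep Learning AMIs or Data Science VM images; the benchmark step is unchanged.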
Ravi Panchumarthy is a machine learning engineer in the Artificial Intelligence Products Group (AIPG) at Intel. He collaborates with Intel's customers and partners to build and optimize AI solutions, and works with cloud service providers to enable Intel's AI optimizations in cloud instances and services. He has a PhD in computer science and engineering from the University of South Florida, with a dissertation on novel nonboolean computing techniques for computer vision applications using nanomagnetic field-based computing. He holds two patents and has several peer-reviewed publications in journals and conference proceedings.