Presented by O’Reilly and Intel Nervana
Put AI to work
September 17-18, 2017: Training
September 18-20, 2017: Tutorials & Conference
San Francisco, CA

Intel Xeon Scalable processor architecture and AI workload performance

1:45pm–2:25pm Wednesday, September 20, 2017
Location: Franciscan CD
Average rating: 4.00 (1 rating)

What you'll learn

  • Explore the architectural features of the latest Intel Xeon Scalable processor

Description

Banu Nagasundaram and Akhilesh Kumar offer an overview of the architectural features of the latest Intel Xeon Scalable processor, outline the changes from previous generations, and discuss the architectural benefits that favor AI workloads. Along the way, they explore AI workload performance for data center CPUs.

Banu Nagasundaram

Intel

Banu Nagasundaram is a product marketing engineer in the Data Center Group at Intel, where she supports performance marketing for Xeon Phi, Intel FPGA, and Xeon for AI. Previously, she was a design engineer on the exascale supercomputing research team with Intel Federal, and before joining Intel, she worked at Qualcomm on design verification of mobile processors. Banu holds an MS in electrical and computer engineering from the University of Florida and is working toward an MBA at UC Berkeley’s Haas School of Business.

Akhilesh Kumar

Intel

Akhilesh Kumar is a principal engineer on the data center processor architecture team at Intel, where he is currently responsible for the Skylake-SP and Cascade Lake processor architectures. In his 21 years at Intel, Akhilesh has contributed to the architecture of various server processors, chipsets, and system fabrics. He holds a PhD in computer science from Texas A&M University.