Presented By O’Reilly and Intel Nervana
Put AI to work
September 17-18, 2017: Training
September 18-20, 2017: Tutorials & Conference
San Francisco, CA

Accelerating deep learning

Bill Jenkins (Intel)
1:45pm–2:25pm Tuesday, September 19, 2017
Location: Franciscan CD

What you'll learn

  • Understand the benefits of Intel's deep learning accelerator library

Description

While large strides have been made in the development of high-performance systems for neural networks based on multicore technology, significant challenges in power, cost, and performance scaling remain. Field-programmable gate arrays (FPGAs) are a natural choice for implementing neural networks because they can combine computing, logic, and memory resources in a single device. FPGAs provide deterministic low latency and highly efficient implementations with various levels of precision due to their customizable architecture. Bill Jenkins shares Intel's deep learning accelerator library, which offers a variety of primitives and architectures highly optimized for FPGAs and allows seamless integration into the Intel ecosystem.

Bill Jenkins

Intel

Bill Jenkins is a senior product line specialist at Intel, where he is involved in marketing, planning, and strategy. Previously, he was an application engineer at Intel and held a variety of roles at government and defense research and development companies, specializing in signal and image processing using CPUs, GPUs, and FPGAs. Bill holds a master’s degree in electrical engineering and an MBA from the University of Massachusetts Lowell, where he focused on computer engineering and signal processing.