While large strides have been made in the development of high-performance systems for neural networks based on multicore technology, significant challenges in power, cost, and performance scaling remain. Field-programmable gate arrays (FPGAs) are a natural choice for implementing neural networks because they can combine computing, logic, and memory resources in a single device. FPGAs provide deterministic low latency and highly efficient implementations with various levels of precision due to their customizable architecture. Bill Jenkins shares Intel’s deep learning accelerator library, which offers a variety of primitives and architectures highly optimized for FPGAs and allows seamless integration into the Intel ecosystem.
Bill Jenkins is a senior product line specialist at Intel, where he is involved in marketing, planning, and strategy. Previously, he was an application engineer at Intel and held a variety of roles at government and defense research and development companies, specializing in signal and image processing using CPUs, GPUs, and FPGAs. Bill holds a master’s degree in electrical engineering and an MBA from the University of Massachusetts Lowell, where he focused on computer engineering and signal processing.
©2017, O'Reilly Media, Inc.