Put AI to Work
April 15-18, 2019
New York, NY

Fast (and cheap) AI accelerated on FPGAs

Ted Way (Microsoft), Maharshi Patel (Microsoft), Aishani Bhalla (Microsoft)
4:55pm–5:35pm Wednesday, April 17, 2019
Implementing AI
Location: Trianon Ballroom
Secondary topics: Computer Vision, Edge computing and Hardware, Platforms and infrastructure

Who is this presentation for?

  • Data scientists and IT pros who want to speed up their DNN processing

Level

Intermediate

Prerequisite knowledge

  • Familiarity with DNNs and the problems they solve, such as image classification with ResNet, Inception, VGG, and DenseNet, and bounding-box detection with models such as SSD-VGG
  • A basic understanding of Python and TensorFlow (useful but not required)

What you'll learn

  • Understand how Project Brainwave accelerates AI using FPGAs
  • Learn how to use Python, TensorFlow, Azure ML, and Project Brainwave to deploy a DNN onto an FPGA

Description

There are many challenges to serving DNNs at scale today. As algorithms become more complex, requiring more processing power, you have to make trade-offs to serve accurate results both quickly and cost-effectively.

Join Ted Way, Maharshi Patel, and Aishani Bhalla to learn how to use Python and TensorFlow to train and deploy computer vision models on Intel FPGAs with Azure Machine Learning and Project Brainwave, achieving performance such as ResNet 50 inference in under 2 ms. Project Brainwave, a DNN-serving platform powered by Intel field-programmable gate array (FPGA) chips, was originally motivated by the recurrent neural networks (RNNs) Microsoft’s Bing team needed to process text and return query results. Ted, Maharshi, and Aishani highlight the innovations in Project Brainwave, from a new data type to spatial compute that delivers high throughput and fast results without requiring large batch sizes. They also explain how Intel FPGAs helped the team adapt to the fast-changing AI landscape: the chips can be reconfigured in software rather than refabricated and redeployed to data centers every time there is a refresh.
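
The sketch below is a toy illustration of the reduced-precision idea behind a custom data type: it quantizes float32 weights onto a coarse grid with a single shared power-of-two scale and measures the error introduced. It assumes only NumPy and is not Brainwave’s actual floating-point format.

    import numpy as np

    def quantize_shared_scale(x, bits=3):
        # One shared power-of-two scale for the whole tensor, then 2**bits
        # quantization steps per unit of that scale (a block-floating-point-style
        # approximation of a narrow data type).
        scale = 2.0 ** np.floor(np.log2(np.max(np.abs(x)) + 1e-12))
        steps = 2 ** bits
        return np.round(x / scale * steps) / steps * scale

    weights = np.random.randn(256, 256).astype(np.float32)
    quantized = quantize_shared_scale(weights)
    print("mean absolute quantization error:", np.mean(np.abs(weights - quantized)))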

Ted, Maharshi, and Aishani then discuss Azure Machine Learning, an end-to-end machine learning platform accessible as a set of services through a Python SDK or command-line interface. Using familiar languages such as Python and frameworks such as TensorFlow, they show how to train a computer vision model and deploy it to an FPGA directly from a Jupyter notebook, with no knowledge of Verilog or VHDL required. Discover how to achieve sub-2 ms latency for models such as ResNet 50, five times faster than published benchmarks. Ted, Maharshi, and Aishani conclude with a demo of running an image classification model from a Jupyter notebook and deploying it to an FPGA in the Azure cloud or on an Azure IoT Edge device.
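
For orientation, here is a minimal sketch of the register-and-deploy flow as it might look in a notebook with the 2019-era Azure ML Python SDK (azureml-core). The FPGA-specific configuration came from a separate Brainwave-specific package not shown here; create_fpga_deploy_config() below is a hypothetical placeholder for that step, not a real API.

    from azureml.core import Workspace
    from azureml.core.model import Model

    # Connect to an existing Azure ML workspace (reads the local config.json).
    ws = Workspace.from_config()

    # Register a trained TensorFlow model (for example, a classifier head on top
    # of a ResNet 50 featurizer) so it can be versioned and deployed as a service.
    model = Model.register(workspace=ws,
                           model_path="./resnet50_classifier",  # local model directory
                           model_name="resnet50-fpga-demo")

    # Hypothetical placeholder: a Brainwave-specific package would supply the
    # FPGA-targeted deployment configuration used below.
    # deploy_config = create_fpga_deploy_config()
    # service = Model.deploy(ws, "fpga-service", [model],
    #                        deployment_config=deploy_config)
    # service.wait_for_deployment(show_output=True)
    # print(service.scoring_uri)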

Ted Way

Microsoft

Ted Way is a senior program manager on the Azure Machine Learning engineering team at Microsoft, where he works on bringing machine learning to the edge and hardware acceleration of AI. He’s passionate about telling the story of how AI will empower people and organizations to achieve more and has been invited as a keynote speaker for two Microsoft partner conferences. He has twice received the Microsoft Executive Briefing Center’s Distinguished Speaker Award, awarded to only five out of over 1,000 speakers. He holds BS degrees in electrical engineering and computer engineering, MS degrees in electrical engineering and biomedical engineering, and a PhD in biomedical engineering from the University of Michigan—Ann Arbor. His PhD dissertation was on “spell check for radiologists,” a computer-aided diagnosis (CAD) system that uses image processing and machine learning to predict lung cancer malignancy on chest CT scans.

Maharshi Patel

Microsoft

Maharshi Patel is a software engineer at Microsoft, where he works on the FPGA-accelerated Azure Machine Learning platform for the cloud and edge.

Aishani Bhalla

Microsoft

Aishani Bhalla is a software engineer on the Azure Machine Learning team at Microsoft, where she’s helping people operationalize models to build intelligence into every application, accelerated on FPGAs with Project Brainwave. She holds a BS in computer science from the University at Buffalo.