Presented By O'Reilly and Cloudera
Make Data Work
March 13–14, 2017: Training
March 14–16, 2017: Tutorials & Conference
San Jose, CA

Squeezing deep learning onto mobile phones

Anirudh Koul (Microsoft)
11:50am–12:30pm Wednesday, March 15, 2017
Data science & advanced analytics
Location: 210 C/G Level: Intermediate
Secondary topics: Deep learning, Hardcore Data Science, Mobile
Average rating: 4.20 (5 ratings)

Who is this presentation for?

  • Data scientists, mobile developers, and software architects

Prerequisite knowledge

  • A high-level understanding of deep learning

What you'll learn

  • Learn how to deploy deep learning architectures on mobile devices
  • Gain practical tips on developing apps for real-world scenarios
  • Become familiar with the ecosystem and platforms available for AI on smartphones

Description

Over the last few years, convolutional neural networks (CNNs) have risen in popularity, especially in computer vision. Anirudh Koul explains how to bring the power of deep learning to memory- and power-constrained devices like smartphones and drones.

Many applications running on smartphones and wearable devices could benefit from the accuracy of deep learning techniques. Local execution also keeps data on the device, avoiding the latency of sending data to the cloud and alleviating privacy concerns. However, CNNs are by nature computationally expensive and memory intensive, making them challenging to deploy on a mobile device. Anirudh explores strategies to circumvent these obstacles and build mobile-friendly shallow CNN architectures that significantly reduce the memory footprint, making CNNs easier to store on a smartphone. By comparing a family of model compression techniques that prune the network size for live image processing, you can build a CNN optimized for inference on mobile devices. Anirudh also covers practical strategies for preprocessing your data so the models remain efficient in the real world.
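One representative member of the model-compression family described above is magnitude-based weight pruning: zero out the smallest weights and keep only the ones that matter. The minimal NumPy sketch below illustrates the idea; the function prune_weights, the 50% sparsity target, and the layer shape are illustrative assumptions, not material from the session.

    import numpy as np

    def prune_weights(weights, sparsity=0.5):
        """Zero out the smallest-magnitude weights so roughly `sparsity` of the
        entries become zero (ties at the threshold may prune a few extra)."""
        flat = np.abs(weights).ravel()
        k = int(sparsity * flat.size)
        if k == 0:
            return weights.copy()
        threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
        mask = np.abs(weights) > threshold            # keep only the larger weights
        return weights * mask

    # Illustrative only: prune one fully connected layer to ~50% sparsity
    layer_weights = np.random.randn(1024, 512).astype(np.float32)
    sparse_weights = prune_weights(layer_weights, sparsity=0.5)
    print("fraction of zeros:", np.mean(sparse_weights == 0))

Stored in a sparse format (and typically followed by retraining to recover accuracy), such a matrix takes far less space than its dense counterpart, which is what makes the network easier to fit on a phone.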

Anirudh showcases these techniques in a real-world project, along with tips and tricks, to demonstrate how to get started developing your own deep learning application suitable for deployment on storage- and power-constrained mobile devices. Similar techniques can also make deep neural nets more efficient in a regular cloud-based production environment, reducing the number of GPUs required and optimizing cost.
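A related compression strategy that pays off both on device and in the cloud is quantizing weights to fewer bits. The rough NumPy sketch below shows 8-bit linear quantization; quantize_uint8, dequantize, and the tensor shape are assumptions for illustration rather than the specific method covered in the talk.

    import numpy as np

    def quantize_uint8(weights):
        """Map float32 weights linearly onto 0..255; return the uint8 codes plus
        the scale and offset needed to reconstruct approximate values later."""
        w_min, w_max = float(weights.min()), float(weights.max())
        scale = (w_max - w_min) / 255.0 or 1.0  # guard against constant tensors
        codes = np.clip(np.round((weights - w_min) / scale), 0, 255).astype(np.uint8)
        return codes, scale, w_min

    def dequantize(codes, scale, offset):
        return codes.astype(np.float32) * scale + offset

    weights = np.random.randn(1024, 512).astype(np.float32)
    codes, scale, offset = quantize_uint8(weights)
    approx = dequantize(codes, scale, offset)
    print("max reconstruction error:", np.abs(weights - approx).max())
    # Storage drops roughly 4x (float32 -> uint8) before any further compression.

The same precision-for-size trade-off applies whether the target is a smartphone's storage budget or serving cost in a cloud deployment.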


Anirudh Koul

Microsoft

Anirudh is a noted AI expert, O'Reilly author, and former scientist at Microsoft AI & Research, where he founded Seeing AI, the most used technology among the blind community after the iPhone. Anirudh serves as the Head of AI & Research at Aira, noted by Time magazine as one of the best inventions of 2018. He is also the author of the upcoming Practical Deep Learning for Cloud & Mobile. With features shipped to a billion users, he brings over a decade of production-oriented applied research experience on petabyte-scale datasets. He has developed technologies using AI techniques for augmented reality, robotics, speech, productivity, and accessibility. Some of his recent work, which IEEE has called "life-changing," has been honored by CES, the FCC, Cannes Lions, and the American Council of the Blind; showcased at events by the UN, the White House, the House of Lords, the World Economic Forum, Netflix, and National Geographic; and applauded by world leaders including Justin Trudeau and Theresa May.