Over the last few years, convolutional neural networks (CNNs) have risen in popularity, especially in the area of computer vision. Anirudh Koul explains how to bring the power of convolutional neural networks and deep learning to memory- and power-constrained devices like smartphones, wearable devices, and drones.
Many mobile applications running on smartphones and wearable devices could benefit from the new opportunities enabled by deep learning. Local execution keeps data on the device, avoiding the latency of round trips to the cloud and alleviating privacy concerns. However, CNNs are by nature computationally and memory intensive, which makes them challenging to deploy on a mobile device. Anirudh shares strategies to circumvent these obstacles: building mobile-friendly shallow CNN architectures that significantly reduce the memory footprint, making the models easier to store on a smartphone, and applying a family of model compression techniques to prune the network for live image processing, yielding a CNN optimized for inference on mobile devices. Along the way, he outlines practical strategies for preprocessing your data so that the models perform efficiently in the real world.
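The abstract does not name a specific toolkit, but as one illustration of the kind of compression it describes, here is a minimal sketch of post-training quantization with TensorFlow Lite, assuming TensorFlow 2.x; the MobileNetV2 weights are just a stand-in for your own trained model.

import tensorflow as tf

# Load a small, mobile-friendly CNN (stand-in for your own trained Keras model).
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Convert to TensorFlow Lite with default (dynamic-range) quantization,
# which stores weights as 8-bit integers and typically cuts model size roughly 4x.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Write the compressed model to disk for bundling with a mobile app.
with open("mobilenet_v2_quant.tflite", "wb") as f:
    f.write(tflite_model)

Quantization is only one member of the compression family the talk covers (alongside pruning and shallower architectures), but it shows the basic workflow: train normally, then produce a smaller artifact optimized for on-device inference.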
Anirudh showcases these techniques through a real-world project and discusses tips and tricks, speed and accuracy trade-offs, and benchmarks on different hardware to show how to get started developing your own deep learning application suitable for deployment on storage- and power-constrained mobile devices. You can apply similar techniques to make deep neural networks more efficient in a regular cloud-based production environment, reducing the number of GPUs required and lowering cost.
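The hardware benchmarks themselves are part of the talk, but measuring inference latency is straightforward to reproduce. A minimal sketch, assuming the quantized .tflite file produced above and TensorFlow 2.x, times repeated runs of the TensorFlow Lite interpreter on dummy input:

import time
import numpy as np
import tensorflow as tf

# Load the compressed model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path="mobilenet_v2_quant.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]

# Dummy input matching the model's expected shape and dtype.
dummy = np.random.random_sample(tuple(input_details["shape"])).astype(
    input_details["dtype"])

# Warm up once, then time repeated runs to estimate average latency.
interpreter.set_tensor(input_details["index"], dummy)
interpreter.invoke()

runs = 50
start = time.perf_counter()
for _ in range(runs):
    interpreter.set_tensor(input_details["index"], dummy)
    interpreter.invoke()
elapsed = time.perf_counter() - start
print(f"Average inference time: {1000 * elapsed / runs:.1f} ms")

Running the same script against the uncompressed and compressed models, on a laptop versus a phone-class CPU, is the simplest way to see the speed/accuracy trade-offs the session discusses.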
Anirudh Koul is the head of AI and research at Aira, noted by Time magazine as one of the best inventions of 2018. He’s a noted AI expert and the author of the upcoming O’Reilly book Practical Deep Learning for Cloud and Mobile. Previously, he was a scientist at Microsoft AI, where he founded Seeing AI, the most-used technology among the blind community after the iPhone. With features shipped to a billion users, he brings over a decade of production-oriented applied research experience on petabyte-scale datasets. He has been developing technologies using AI techniques for augmented reality, robotics, speech, productivity, and accessibility. Some of his recent work, which IEEE has called “life-changing,” has been honored by CES, the FCC, Cannes Lions, and the American Council of the Blind; showcased at events by the UN, the White House, the House of Lords, the World Economic Forum, Netflix, and National Geographic; and applauded by world leaders including Justin Trudeau and Theresa May.