With more than 15 billion units shipped each year, microcontrollers (MCUs) are everywhere. MCUs are low cost and energy efficient and often sit at the heart of IoT devices. In many applications, bandwidth constraints mean that decisions and features must be computed at the edge.
Neil Tan offers an overview of uTensor, the first framework to streamline model deployment on MCUs, allowing you to push AI to the edge rather than sending everything to the cloud. Enabling AI at the edge improves the quantity and quality of the data collected and can extend the functionality of existing devices. uTensor is an open source project intended to open a dialogue and foster collaboration between data scientists and software engineers building the next generation of AI products.
Of course, trade-offs must be made when designing an extremely resource-constrained inference engine. Neil details some of these design decisions and explains how he successfully deployed a three-layer MLP MNIST model on MCUs with as little as 256 KB of RAM. Support for convolutional and recurrent neural networks is in the works. Neil also discusses uTensor’s recent CMSIS-NN integration, which makes it faster and more energy efficient.
Join in to learn how to use and extend uTensor in your own projects.
Neil Tan is an ARM developer evangelist with a keen interest in IoT and machine learning. He works closely with open source developer communities in Asia. Neil is the main author of uTensor. He is a speaker at events such as FOSDEM and has served as a judge for a design contest.
©2018, O'Reilly Media, Inc.