Convolutional neural networks (CNNs) have been used in many image classification tasks and are usually trained on large image datasets, such as ImageNet and CIFAR. CNNs have been shown to be very effective at extracting features from images across diverse domains.
In practice, most people do not train a CNN from scratch, due to the time and data this requires. Transfer learning enables you to take pretrained deep neural networks (e.g., AlexNet, ResNet, and Inception V3) and adapt them for custom image classification tasks. Danielle Dean and Wee Hyong Tok walk you through the basics of transfer learning and demonstrate how you can use the technique to bootstrap the building of custom image classifiers using pretrained CNNs available in various deep learning toolkits (e.g., pretrained CNTK models, the Caffe Model Zoo, and pretrained TensorFlow models).
Danielle Dean is the technical director of machine learning at iRobot. Previously, she was a principal data science lead at Microsoft. She holds a PhD in quantitative psychology from the University of North Carolina at Chapel Hill.
Wee Hyong Tok is a principal data science manager with the AI CTO office at Microsoft, where he leads the engineering and data science team for the AI for Earth program. Wee Hyong has worn many hats in his career, including developer, program and product manager, data scientist, researcher, and strategist, and his track record of leading successful engineering and data science teams has made him a trusted AI advisor to customers. Wee Hyong has coauthored several books on artificial intelligence, including Predictive Analytics Using Azure Machine Learning and Doing Data Science with SQL Server. He holds a PhD in computer science from the National University of Singapore.
©2017, O'Reilly Media, Inc.