Transfer learning allows data scientists to leverage insights from large labeled datasets. The general idea of transfer learning is to use knowledge learned from tasks for which a lot of labeled data is available in settings where only little labeled data is available. Transfer learning is not a machine learning model or technique; rather, it is a design methodology within machine learning.
Lars Hulstaert explains what transfer learning is and how it can boost your NLP or CV pipelines. Lars offers an overview of transfer learning and explores the different levels of transfer learning, from multitask learning to pure feature-based learning. He then covers applications for NLP and CV. In the first, word embeddings are applied to a text classification problem; in the second, pretrained networks are applied to an object detection problem. Lars concludes by sharing a set of guidelines for enabling transfer learning in your problem domain.
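The feature-based end of the spectrum mentioned above can be sketched in a few lines: pretrained word vectors are kept frozen and reused as features, and only a small classifier is trained on the little labeled data available. The toy embeddings, sentences, and labels below are illustrative stand-ins (a real pipeline would load GloVe or word2vec vectors), not part of the talk itself.

```python
import numpy as np

# Toy stand-ins for pretrained word vectors; a real pipeline would load
# GloVe/word2vec. Dimension 0 loosely encodes sentiment here by construction.
embeddings = {
    "good": np.array([1.0, 0.2]),
    "great": np.array([0.9, 0.1]),
    "bad": np.array([-1.0, 0.3]),
    "awful": np.array([-0.8, 0.2]),
    "movie": np.array([0.0, 1.0]),
}

def featurize(sentence):
    # Average the frozen pretrained vectors of known words
    # (pure feature-based transfer: the embeddings are never updated).
    vecs = [embeddings[w] for w in sentence.split() if w in embeddings]
    return np.mean(vecs, axis=0)

# Tiny illustrative labeled set: 1 = positive, 0 = negative.
texts = ["good movie", "great movie", "bad movie", "awful movie"]
y = np.array([1.0, 1.0, 0.0, 0.0])
X = np.stack([featurize(t) for t in texts])

# Only this small logistic-regression head is trained, by gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    grad = p - y                            # gradient of log loss
    w -= 0.5 * X.T @ grad / len(y)
    b -= 0.5 * grad.mean()

def predict(sentence):
    p = 1.0 / (1.0 + np.exp(-(featurize(sentence) @ w + b)))
    return int(p > 0.5)
```

At the other end of the spectrum, the embeddings themselves would also be updated (fine-tuned) during training; freezing them, as here, is what makes the approach viable with very little labeled data.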
Lars Hulstaert is a data scientist at Microsoft. Previously, he studied machine learning at Cambridge University and Ghent University.
©2018, O'Reilly Media, Inc.