Since their introduction in the early 2010s, word vector embedding models have exploded in popularity and use. They are one of the key breakthroughs that have enabled a new, state-of-the-art approach to natural language processing based on deep learning. But despite their impact, relatively few practitioners understand how word vector models work under the hood to capture the semantic relationships within natural language data and produce their remarkable results.
Patrick Harrison opens up the black box of a popular word embedding algorithm and walks you through how it works its magic. Patrick also covers core neural network concepts, including hidden layers, loss gradients, backpropagation, and more.
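The concepts listed above — a hidden layer of embeddings, a loss, gradients, and backpropagation — can be illustrated with a minimal skip-gram-style sketch. The toy corpus, dimensions, and learning rate below are illustrative assumptions, not material from the talk, and a full softmax is used for simplicity rather than the negative-sampling tricks production word2vec implementations rely on.

```python
import numpy as np

# Toy corpus and vocabulary (assumed for illustration).
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
word_to_id = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8  # vocabulary size, embedding dimension

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, D))   # hidden layer: one embedding per word
W_out = rng.normal(scale=0.1, size=(D, V))  # output weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def skipgram_pairs(window=1):
    """Yield (center, context) word-id pairs from the corpus."""
    for i, w in enumerate(corpus):
        for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
            if i != j:
                yield word_to_id[w], word_to_id[corpus[j]]

losses = []
for epoch in range(50):
    total = 0.0
    for center, context in skipgram_pairs():
        h = W_in[center]             # look up the center word's embedding
        p = softmax(W_out.T @ h)     # predicted distribution over context words
        total += -np.log(p[context]) # cross-entropy loss for this pair
        # Backpropagation: gradient of the loss w.r.t. scores is (p - one_hot).
        dscore = p.copy()
        dscore[context] -= 1.0
        grad_out = np.outer(h, dscore)
        grad_in = W_out @ dscore
        W_out -= 0.05 * grad_out     # gradient-descent updates
        W_in[center] -= 0.05 * grad_in
    losses.append(total)

# After training, rows of W_in are the learned word vectors.
```

Each pass computes a loss, backpropagates its gradient through the output weights and the embedding (hidden) layer, and nudges the parameters downhill; words that appear in similar contexts end up with similar rows in `W_in`.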
This talk is based on an excerpt from the forthcoming book Deep Learning with Text from O’Reilly Media.
Patrick Harrison started and leads the data science team at S&P Global Market Intelligence (S&P MI), a business and financial intelligence firm and data provider. The team employs a wide variety of data science tools and techniques, including machine learning, natural language processing, recommender systems, and graph analytics. Patrick is the coauthor of the forthcoming book Deep Learning with Text from O'Reilly Media, along with Matthew Honnibal, creator of spaCy, the industrial-strength natural language processing software library. He is a founding organizer of a machine learning conference in Charlottesville, Virginia, and is actively involved in building both regional and global data science communities. Patrick holds a BA in economics and an MS in systems engineering, both from the University of Virginia. His graduate research focused on complex systems and agent-based modeling.
©2018, O'Reilly Media, Inc. • (800) 889-8969 or (707) 827-7019 • Monday-Friday 7:30am-5pm PT • All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.