Jonathan Mugan surveys two paths in natural language processing to move from meaningless tokens to artificial intelligence.
The first path is the symbolic path. Jonathan explores the bag-of-words and tf-idf models for document representation and discusses topic modeling with latent Dirichlet allocation (LDA). Jonathan then covers sentiment analysis; symbolic representations such as WordNet, FrameNet, and ConceptNet; and the importance of causal models for language understanding.
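As a minimal sketch of the two document representations named above, the snippet below builds bag-of-words counts and a simple tf-idf weight in pure Python; the toy corpus and the unsmoothed idf formula are illustrative choices, not anything specific to the talk.

```python
import math
from collections import Counter

# Toy corpus; each document becomes a "bag" of word counts (order discarded).
docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are animals",
]
bags = [Counter(doc.split()) for doc in docs]

N = len(docs)

def tfidf(term, bag):
    """Term frequency weighted by inverse document frequency, so words
    common across the whole corpus (like "the") are down-weighted."""
    tf = bag[term] / sum(bag.values())
    df = sum(1 for b in bags if term in b)  # how many documents contain the term
    idf = math.log(N / df) if df else 0.0
    return tf * idf

print(tfidf("the", bags[0]))  # common word: low weight
print(tfidf("cat", bags[0]))  # distinctive word: higher weight
```

Libraries such as scikit-learn provide production versions of these representations; the point here is only that a document reduces to weighted token counts, with no notion of word order or meaning.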
The second path is the subsymbolic path—the neural networks (deep learning) that you’ve heard so much about. Jonathan begins with word vectors, explaining how they are used in sequence-to-sequence models for machine translation, before demonstrating how machine translation lays the foundation for general question answering. Jonathan concludes with a discussion of how to build deeper understanding into your artificial systems.
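The core idea behind the word vectors mentioned above is that words become points in a vector space, where related words sit close together. The sketch below uses tiny hand-made vectors to show the similarity computation; real embeddings (e.g. word2vec or GloVe) are learned from large corpora rather than written by hand.

```python
import math

# Toy 3-d "word vectors", hand-made purely for illustration.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0.0 means orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Related words point in similar directions:
print(cosine(vectors["king"], vectors["queen"]))  # high similarity
print(cosine(vectors["king"], vectors["apple"]))  # low similarity
```

Sequence-to-sequence models consume sequences of such vectors: an encoder network reads the source sentence vector by vector, and a decoder emits the target sentence, which is what lets the same machinery serve both translation and question answering.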
Jonathan Mugan is CEO of DeepGrammar. Jonathan specializes in artificial intelligence and machine learning, and his current research focuses on deep learning, where he seeks to allow computers to acquire abstract representations that enable them to capture subtleties of meaning. Jonathan holds a PhD in computer science from the University of Texas at Austin. His thesis work concerned developmental robotics and focused on the problem of how to build robots that can learn about the world in the same way that children do.
©2017, O'Reilly Media, Inc.