Put AI to work
June 26-27, 2017: Training
June 27-29, 2017: Tutorials & Conference
New York, NY

Adding meaning to natural language processing

Jonathan Mugan (DeepGrammar)
4:50pm–5:30pm Thursday, June 29, 2017
Implementing AI
Location: Sutton South/Regent Parlor
Level: Intermediate
Secondary topics:  Machine Learning, Natural Language
Average rating: ***** (5.00, 1 rating)

What you'll learn

  • Understand the current state of the art in natural language processing
  • Learn what is possible and what isn't with current and near-future NLP

Description

Jonathan Mugan surveys two paths by which natural language processing can move from meaningless tokens to artificial intelligence.

The first path is the symbolic path. Jonathan explores the bag-of-words and tf-idf models for document representation and discusses topic modeling with latent Dirichlet allocation (LDA). Jonathan then covers sentiment analysis; knowledge representations such as WordNet, FrameNet, and ConceptNet; and the importance of causal models for language understanding.
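The symbolic-path building blocks named above can be tried out in a few lines. The sketch below is a minimal illustration, assuming scikit-learn and a toy corpus; the talk itself does not prescribe any particular library.

```python
# Minimal sketch of bag-of-words, tf-idf, and LDA topic modeling,
# using scikit-learn on a toy corpus (assumed for illustration only).
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the cat sat on the mat",
    "dogs and cats are common pets",
    "stock prices rose on strong earnings",
    "the market fell after the earnings report",
]

# Bag-of-words: each document becomes a vector of raw token counts.
bow = CountVectorizer().fit_transform(docs)

# tf-idf: counts reweighted so rare, document-specific words carry more weight.
tfidf = TfidfVectorizer().fit_transform(docs)

# LDA: a topic model fit on the bag-of-words counts (here, two toy topics).
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(bow)
print(doc_topics)  # per-document topic proportions
```

Note that all three representations treat words as opaque tokens; the talk's point is that richer resources (WordNet, FrameNet, ConceptNet, causal models) are needed to attach meaning to them.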

The second path is the subsymbolic path—the neural networks (deep learning) that you’ve heard so much about. Jonathan begins with word vectors, explaining how they are used in sequence-to-sequence models for machine translation, before demonstrating how machine translation lays the foundation for general question answering. Jonathan concludes with a discussion of how to build deeper understanding into your artificial systems.
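To make the word-vector idea concrete, here is a minimal sketch of the similarity and analogy operations such vectors support. The three-dimensional vectors below are hand-made toys (an assumption for illustration); in practice they would be learned by a model such as word2vec and have hundreds of dimensions.

```python
# Toy word vectors illustrating similarity and analogy via vector arithmetic.
import numpy as np

vectors = {
    "king":  np.array([0.8, 0.9, 0.1]),
    "queen": np.array([0.8, 0.1, 0.9]),
    "man":   np.array([0.2, 0.9, 0.1]),
    "woman": np.array([0.2, 0.1, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The classic analogy: king - man + woman should land near queen.
target = vectors["king"] - vectors["man"] + vectors["woman"]
best = max(vectors, key=lambda w: cosine(vectors[w], target))
print(best)  # -> "queen" with these toy vectors
```

Sequence-to-sequence models build on the same idea, encoding an input sentence into vectors and decoding an output sentence from them, which is why machine translation serves as a foundation for question answering in the talk.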


Jonathan Mugan

DeepGrammar

Jonathan Mugan is CEO of DeepGrammar. Jonathan specializes in artificial intelligence and machine learning, and his current research focuses on deep learning, where he seeks to allow computers to acquire abstract representations that enable them to capture subtleties of meaning. Jonathan holds a PhD in computer science from the University of Texas at Austin. His thesis work concerned developmental robotics and focused on the problem of how to build robots that can learn about the world in the same way that children do.