Presented By O’Reilly and Cloudera
Make Data Work
21–22 May 2018: Training
22–24 May 2018: Tutorials & Conference
London, UK

Natural language understanding at scale with spaCy and Spark NLP

David Talby (Pacific AI), Claudiu Branzan (Accenture)
13:30–17:00 Tuesday, 22 May 2018
Data science and machine learning
Location: Capital Suite 13
Level: Intermediate
Secondary topics: Text and language processing and analysis
Average rating: 4.33 (3 ratings)

Who is this presentation for?

  • Data scientists, machine learning engineers, architects, and engineering managers

Prerequisite knowledge

  • A working knowledge of Python, Spark, and machine learning

Materials or downloads needed in advance

Download and install Docker (Community Edition, Stable channel) by following the official installation instructions for your platform.

After Docker is running on your machine, run the following command to pull the Docker image for this tutorial:

  • docker pull melcutz/nlu-demo

Because of all the dependencies (Spark, spaCy, NLTK, UMLS, etc.), the image is quite large, so it may take a while to download. Once the command finishes successfully and the image is on your machine (run ‘docker images’ to verify), start the Docker container with the following command:

  • docker run -it --rm -p 8888:8888 melcutz/nlu-demo

If the container launches successfully, the output should include something like:

  • Copy/paste this URL into your browser when you connect for the first time, to login with a token: http://localhost:8888/?token=a8309a652c58fe0172483ef845461af030349e04cb0ac88e

Follow that instruction: copy and paste the provided URL into your browser of choice (we tested with Chrome), and you should be taken to a Jupyter Notebook instance running inside the Docker container.
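
Once the notebook is open, a quick sanity check like the one below can confirm that the core libraries are importable. This is only a rough sketch: it assumes the container ships spaCy with an English model and PySpark, which may not match the actual image contents.

    # Quick sanity check inside the tutorial notebook. Assumes the container ships
    # spaCy with an English model and PySpark; adjust to whatever the image provides.
    import spacy
    from pyspark.sql import SparkSession

    nlp = spacy.load("en")  # spaCy 1.x/2.x shortcut name for the English model
    doc = nlp(u"London is the capital of the United Kingdom.")
    print([(token.text, token.pos_) for token in doc])

    spark = SparkSession.builder.appName("nlu-demo-check").getOrCreate()
    print("Spark version:", spark.version)
    spark.stop()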

What you'll learn

  • Gain hands-on experience with common NLP tasks and pipelines using spaCy and Spark NLP

Description

Natural language processing is a key component in many data science systems that must understand or reason about text. Common use cases include question answering, paraphrasing or summarization, sentiment analysis, natural language BI, language modeling, and disambiguation. Building such systems usually requires combining three types of software libraries: NLP annotation frameworks, machine learning frameworks, and deep learning frameworks.

David Talby and Claudiu Branzan lead a hands-on tutorial for scalable NLP using spaCy for building annotation pipelines, Spark NLP for training distributed custom natural language machine-learned pipelines, and Spark ML and TensorFlow for using deep learning to build and apply word embeddings. You’ll spend about half your time coding as you work through three sections, each with an end-to-end working codebase that you are then asked to change and improve.

Outline

Using spaCy to build an NLP annotation pipeline that can understand text structure, grammar, and sentiment and perform entity recognition (see the sketch after this list)

  • Built-in spaCy annotators
  • Debugging and visualizing results
  • Creating custom pipelines
  • Practical trade-offs for large-scale projects, as well as for balancing performance versus accuracy
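
As a rough illustration of this part of the outline (not the tutorial's actual notebook), the sketch below runs spaCy's built-in annotators, adds a trivial custom pipeline component, and renders the results; it assumes a spaCy 2.x API with the en_core_web_sm model installed.

    # Illustrative spaCy sketch: built-in annotators plus a trivial custom component.
    # Assumes spaCy 2.x and the en_core_web_sm model; not the tutorial's actual code.
    import spacy
    from spacy import displacy

    nlp = spacy.load("en_core_web_sm")

    def count_tokens(doc):
        # Hypothetical custom component: reports a statistic and passes the Doc on.
        print("token count:", len(doc))
        return doc

    nlp.add_pipe(count_tokens, last=True)  # spaCy 2.x accepts a callable component

    doc = nlp(u"Apple is looking at buying a U.K. startup for $1 billion.")
    print([(t.text, t.pos_, t.dep_) for t in doc])       # structure and grammar
    print([(ent.text, ent.label_) for ent in doc.ents])  # entity recognition

    # Debugging and visualization: render the entities inline in a Jupyter notebook.
    displacy.render(doc, style="ent", jupyter=True)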

Using TensorFlow to build domain-specific machine-learned annotators and then integrate them into an existing NLP pipeline (a minimal sketch follows the list)

  • Feature engineering and optimization
  • Measurement
  • Practical considerations when working on problems that require understanding text beyond keyword matching and one-hot encoding
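
To give a flavour of this part, here is a minimal, self-contained tf.keras sketch of a text classifier that learns word embeddings rather than relying on one-hot features; the data, sizes, and names are invented for the example and are not from the tutorial.

    # Minimal TensorFlow (tf.keras) sketch: a toy text classifier that learns word
    # embeddings instead of one-hot features. Toy data only; the tutorial's real
    # annotator, features, and corpus will differ.
    import numpy as np
    import tensorflow as tf

    VOCAB_SIZE, MAX_LEN = 5000, 40

    # Hypothetical training data: padded sequences of word ids and binary labels.
    x_train = np.random.randint(1, VOCAB_SIZE, size=(256, MAX_LEN))
    y_train = np.random.randint(0, 2, size=(256,))

    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(VOCAB_SIZE, 64, input_length=MAX_LEN),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=2, batch_size=32, verbose=0)

    # Measurement: evaluate (ideally on held-out data) before wiring the model into a pipeline.
    loss, accuracy = model.evaluate(x_train, y_train, verbose=0)
    print("loss:", loss, "accuracy:", accuracy)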

Using Spark ML and TensorFlow to apply deep learning to expand and update ontologies (a Spark ML sketch follows the list)

  • Comparison of word2vec and doc2vec
  • When each is useful
  • How to apply them to increase the accuracy of classification or information retrieval problems
  • Current trade-offs in integrating spaCy and Spark when engineering distributed, large-scale NLP pipelines
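
As a small illustration of this part, the sketch below trains word vectors with Spark ML's Word2Vec on a toy corpus; the per-document vector it produces is the average of the word vectors, a simple stand-in for doc2vec (which Spark ML does not ship). The corpus and column names are made up for the example.

    # Spark ML Word2Vec sketch on a toy corpus. transform() returns the average of
    # each document's word vectors, a simple stand-in for doc2vec.
    from pyspark.sql import SparkSession
    from pyspark.ml.feature import Word2Vec

    spark = SparkSession.builder.appName("embeddings-sketch").getOrCreate()

    docs = spark.createDataFrame([
        ("the patient was prescribed aspirin".split(" "),),
        ("the doctor ordered an mri scan".split(" "),),
        ("aspirin reduces the risk of stroke".split(" "),),
    ], ["tokens"])

    word2vec = Word2Vec(vectorSize=50, minCount=1, inputCol="tokens", outputCol="doc_vector")
    model = word2vec.fit(docs)

    model.getVectors().show(5, truncate=False)                        # word-level embeddings
    model.transform(docs).select("doc_vector").show(truncate=False)   # per-document vectors
    print(model.findSynonyms("aspirin", 2).collect())  # nearest neighbours, e.g. for ontology expansion

    spark.stop()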

David Talby

Pacific AI

David Talby is a chief technology officer at Pacific AI, helping fast-growing companies apply big data and data science techniques to solve real-world problems in healthcare, life science, and related fields. David has extensive experience in building and operating web-scale data science and business platforms, as well as building world-class, agile, distributed teams. Previously, he led business operations for Bing Shopping in the US and Europe with Microsoft’s Bing Group, and he built and ran distributed teams in both Seattle and the UK that helped scale Amazon’s financial systems. David holds a PhD in computer science and master’s degrees in both computer science and business administration.


Claudiu Branzan

Accenture

Claudiu Branzan is an analytics senior manager in the Applied Intelligence Group at Accenture, based in Seattle, where he leverages his more than 10 years of expertise in data science, machine learning, and AI to promote the use and benefits of these technologies to build smarter solutions to complex problems. Previously, Claudiu held highly technical client-facing leadership roles in companies using big data and advanced analytics to offer solutions for clients in healthcare, high-tech, telecom, and payments verticals.

Comments on this page are now closed.

Comments

Claudiu Branzan | ANALYTICS SENIOR MANAGER
22/05/2018 1:07 BST

Thanks for the info, Sertan! We specifically put “--” as we had that issue before :)
Also, please note that the token is dynamically generated, so it will differ from the one in the example above. Please use the link generated after executing the ‘docker run’ command. We will be in the room a few minutes early to help anyone set up their environment. Even if you don’t get to install anything, you can still follow along, and it should be fun ;)

Sertan Şentürk | DATA SCIENTIST
22/05/2018 1:03 BST

P.S.: The dashes in the command I posted before are also wrong due to the website’s automatic formatting. You should simply type the command in by hand :)

Sertan Şentürk | DATA SCIENTIST
22/05/2018 1:00 BST

If you copy and paste the docker run command, you might get a “docker: invalid reference format.” error. This is because some of the dash characters are actually en dashes. Below is the command with the correct characters:

docker run -it --rm -p 8888:8888 melcutz/nlu-demo