Download and install Docker (Community Edition / Stable channel) following the instructions on:
After Docker is running on your machine, run the following command to pull the Docker image for this tutorial:
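The command itself did not survive on this page, and the exact image name is not given here, so the repository and tag below are placeholders; a typical pull looks like:

```shell
# Pull the tutorial image from a registry (e.g. Docker Hub).
# NOTE: <repository>/<image>:<tag> is a placeholder -- substitute the
# image name provided in the tutorial materials.
docker pull <repository>/<image>:<tag>
```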
Because of all the dependencies (Spark, spaCy, NLTK, UMLS, etc.), the image file is very large, so it may take a while to download. Once the command finishes successfully and the image is on your machine (run 'docker images' to verify), use the following command to start the Docker container:
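The exact run command is also not preserved here; for a Jupyter-based image, a typical invocation (with the image name again as a placeholder) publishes Jupyter's default port so the notebook server inside the container is reachable from the host:

```shell
# Start a container interactively and map the container's port 8888
# (Jupyter's default) to port 8888 on the host.
# NOTE: <repository>/<image>:<tag> is a placeholder for the tutorial image.
docker run -it -p 8888:8888 <repository>/<image>:<tag>
```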
If the container launches successfully, the output should look something like:
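The sample output did not survive on this page; a typical Jupyter Notebook startup message looks roughly like the following (the token is a placeholder):

```
[I 08:58:24.417 NotebookApp] The Jupyter Notebook is running at:
[I 08:58:24.417 NotebookApp] http://0.0.0.0:8888/?token=<token>
[I 08:58:24.417 NotebookApp] Use Control-C to stop this server and shut down all kernels.
    Copy/paste this URL into your browser when you connect for the first time,
    to login with a token:
        http://0.0.0.0:8888/?token=<token>
```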
So, follow the instructions and copy/paste the provided URL into your browser of choice (we tested on Chrome), and you should be able to navigate to an instance of Jupyter Notebook running inside the Docker container.
Natural language processing is a key component in many data science systems that must understand or reason about text. Common use cases include question answering, paraphrasing or summarization, sentiment analysis, natural language BI, language modeling, and disambiguation. Building such systems usually requires combining three types of software libraries: NLP annotation frameworks, machine learning frameworks, and deep learning frameworks.
David Talby and Claudiu Branzan lead a hands-on tutorial for scalable NLP using spaCy for building annotation pipelines, Spark NLP for training distributed custom natural language machine-learned pipelines, and Spark ML and TensorFlow for using deep learning to build and apply word embeddings. You’ll spend about half your time coding as you work through three sections, each with an end-to-end working codebase that you are then asked to change and improve.
Using spaCy to build an NLP annotations pipeline that can understand text structure, grammar, and sentiment and perform entity recognition
Using TensorFlow to build domain-specific machine-learned annotators and then integrating them into an existing NLP pipeline
Using Spark ML and TensorFlow to apply deep learning to expand and update ontologies
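As a small taste of the first section, here is a minimal spaCy pipeline sketch. To stay self-contained it uses a blank English pipeline plus spaCy's rule-based sentence splitter, so it runs without downloading a trained model; the tutorial's actual pipelines load trained models (e.g. via spacy.load) to add tagging, parsing, and entity recognition on top of this.

```python
import spacy

# A blank English pipeline: tokenizer only, no trained model required.
nlp = spacy.blank("en")

# Add a rule-based sentence boundary detector to the pipeline.
nlp.add_pipe("sentencizer")

# Running text through the pipeline yields an annotated Doc object.
doc = nlp("Spark NLP scales well. spaCy is easy to use.")

sentences = [sent.text for sent in doc.sents]  # sentence segmentation
tokens = [token.text for token in doc]         # tokenization
```

With a trained model loaded instead of a blank pipeline, the same Doc object would also expose part-of-speech tags (token.pos_) and named entities (doc.ents).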
David Talby is a chief technology officer at Pacific AI, helping fast-growing companies apply big data and data science techniques to solve real-world problems in healthcare, life science, and related fields. David has extensive experience in building and operating web-scale data science and business platforms, as well as building world-class, agile, distributed teams. Previously, he led business operations for Bing Shopping in the US and Europe with Microsoft’s Bing Group and built and ran distributed teams that helped scale Amazon’s financial systems with Amazon in both Seattle and the UK. David holds a PhD in computer science and master’s degrees in both computer science and business administration.
Claudiu Branzan is an analytics senior manager in the Applied Intelligence Group at Accenture, based in Seattle, where he leverages his more than 10 years of expertise in data science, machine learning, and AI to promote the use and benefits of these technologies to build smarter solutions to complex problems. Previously, Claudiu held highly technical client-facing leadership roles in companies utilizing big data and advanced analytics to offer solutions for clients in the healthcare, high-tech, telecom, and payments verticals.
©2018, O’Reilly UK Ltd • (800) 889-8969 or (707) 827-7019 • Monday-Friday 7:30am-5pm PT • All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners. • email@example.com