Text analytics 101: Deep learning and attention networks all the way to production
Who is this presentation for?
- Data scientists, data engineers, data architects, CxOs, and software engineers
According to industry estimates, more than 80% of the data being generated is in an unstructured format, whether text, images, audio, or video. Data is generated as we speak, write, tweet, use social media, send messages, shop online, or perform various other activities. Textual data is the most common form, accounting for more than 50% of existing data. Many insights can be mined from this huge repository of unstructured data, but doing so requires a sophisticated approach.
To produce significant and actionable insights from text data, it's necessary to combine natural language processing (NLP) with machine learning, deep learning, and other state-of-the-art techniques in this space. With the latest developments in deep learning and artificial intelligence, many demanding natural language processing tasks have become far easier to implement and execute. Text generation is one task that can be built with deep learning models, especially recurrent neural networks and their variant, long short-term memory (LSTM) networks.
Text generation is a language modeling problem. Language modeling is at the heart of many natural language processing tasks, such as speech synthesis, dialogue systems, and text summarization. A well-trained language model learns the probability of a word appearing given the sequence of previous words in the text. Language models can operate at the level of characters, n-grams, sentences, and even paragraphs.
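To make the idea concrete before the deep learning version: the simplest language model just counts. The sketch below (a minimal illustration with an invented toy corpus, not code from the session) estimates P(word | previous word) from bigram counts, which is exactly the probability a neural language model learns to approximate over longer contexts:

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Estimate P(word | previous word) from raw bigram counts."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for prev, word in zip(tokens, tokens[1:]):
            counts[prev][word] += 1
    # Normalize each row of counts into a conditional distribution
    return {
        prev: {w: c / sum(ctr.values()) for w, c in ctr.items()}
        for prev, ctr in counts.items()
    }

# Toy corpus for illustration only
corpus = ["the cat sat", "the cat ran", "the dog sat"]
model = train_bigram_model(corpus)
print(model["the"]["cat"])  # "cat" follows "the" in 2 of 3 sentences
```

An RNN or LSTM language model replaces the count table with a learned function of the whole preceding sequence, which is what lets it generate coherent text rather than just the next most frequent word.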
Vijay Srinivas Agneeswaran, Pramod Singh, and Akshay Kulkarni explore how to create a language model that generates natural language text by implementing and training recurrent neural networks and attention networks built on top of TensorFlow 2.0. They also examine how to efficiently build NLP-based applications for text summarization using deep learning networks on TensorFlow 2.0. Text summarization requires a great deal of abstraction, so they use sequence-to-sequence models with bidirectional encoders and decoders.
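At its core, the attention mechanism used in such networks computes a weighted average of encoder states, where the weights reflect how relevant each input position is to the current decoding step. The following is a minimal NumPy sketch of scaled dot-product attention (an illustration of the general technique, not the speakers' TensorFlow implementation):

```python
import numpy as np

def attention(query, keys, values):
    # Score each key against the query, scaled by the key dimension
    scores = query @ keys.T / np.sqrt(keys.shape[-1])
    # Softmax turns scores into weights that sum to 1
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()
    # The output is the weighted average of the values
    return weights @ values

query = np.array([1.0, 0.0])                # what the decoder is "looking for"
keys = np.array([[1.0, 0.0], [0.0, 1.0]])   # one key per encoder position
values = np.array([[10.0], [20.0]])         # one value per encoder position
out = attention(query, keys, values)        # pulled toward the matching value
```

Because the query matches the first key more strongly, the output lands closer to the first value than the second; in a summarization model, this is how the decoder focuses on the relevant parts of the source document at each step.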
Not only do you get to see the notebooks for the problems outlined above, but you'll also learn how these text analytics workloads can be implemented on top of Kubeflow, which helps build scalable, production-ready implementations.
Prerequisite knowledge
- A basic understanding of deep learning
What you'll learn
- An introduction to NLP and its components, such as summarization and generation, and why deep learning-based frameworks are needed for NLP tasks
- An understanding of TensorFlow 2.0 and of state-of-the-art recurrent neural networks, LSTMs, and attention networks for NLP tasks
- Familiarity with TensorFlow 2.0 notebooks and end-to-end text analytics code using Kubeflow
Pramod Singh is a manager for data science at Publicis Sapient and a track lead for a machine learning platform project with Mercedes-Benz. He has extensive hands-on experience in machine learning, deep learning, AI, data engineering, programming, and designing algorithms for business requirements in domains such as retail, telecom, automotive, and consumer goods, and has spent the last eight years working on data projects at product- and service-based organizations. He's the author of Machine Learning with PySpark and a regular speaker at major conferences and universities. He's currently writing a couple of books on deep learning and AI techniques for O'Reilly and Apress. Pramod holds a bachelor's degree in electrical and electronics engineering from Mumbai University, an MBA focused on operations and finance from Symbiosis International University, and a data analytics certification from IIM Calcutta. He lives in Bangalore with his wife and two-year-old son. In his spare time, he enjoys playing guitar, coding, reading, and watching football.
Akshay Kulkarni is a senior data scientist on SapientRazorfish's core AI and data science team, where he contributes to strategy and transformation interventions through AI, manages high-priority growth initiatives around data science, and works on machine learning, deep learning, natural language processing, and artificial intelligence engagements, applying state-of-the-art techniques. A renowned AI and machine learning evangelist, author, and speaker, he was recently recognized as one of the "top 40 under 40 data scientists" in India by Analytics India Magazine. He's consulted with several Fortune 500 and global enterprises, driving AI and data science-led strategic transformations, and has rich experience building and scaling AI and machine learning businesses and creating significant client impact. He's actively involved in next-gen AI research and is part of the next-gen AI community. Previously, he was part of Gartner and Accenture, where he scaled the AI and data science business. A regular speaker at major data science conferences, he recently gave a talk on "Sequence Embeddings for Prediction Using Deep Learning" at GIDS. He's the author of a book on NLP with Apress and is currently writing a couple more books with Packt on deep learning and next-gen NLP. He's also a visiting faculty member (industry expert) at a few of the top universities in India. In his spare time, he likes to read, write, code, and help aspiring data scientists.