Reading comprehension has been used as a benchmark for evaluating natural language processing systems for decades. Typically, a system is given a story and a set of questions related to the story, and the system's ability to correctly answer those questions is used to determine how well the system understands the story, much the same way schoolchildren are assessed. Reading comprehension systems have traditionally employed a mix of rule-based and statistical approaches and have more recently begun using trained end-to-end neural models with the objective of maximizing answer accuracy. For the most part, the "how" and "why" of getting the correct answer have not been a focus of system development or of system-user interaction.
Jennifer Chu-Carroll starts with a brief overview of the state of the art in the reading comprehension landscape, demonstrating that even though some systems are capable of achieving reasonably high accuracy on benchmark datasets, the scope of these datasets is quite limited, and a system's high performance does not necessarily indicate a level of understanding that translates to answering related questions that are natural to human readers. Jennifer then explores a reading comprehension dataset developed at Elemental Cognition that focuses on assessing the cognitive abilities of the reader, requiring not only the correct answer but also a human-consumable explanation for that answer. Jennifer concludes by explaining how Elemental Cognition leverages this dataset to drive the development of its "natural learning" reading comprehension system, which engages in dialogue with users to learn and deepen its understanding of texts and of the world.
Jennifer Chu-Carroll is a research scientist at Elemental Cognition, where she focuses on natural language semantics and dialogue management. Previously, Jennifer was a research staff member and manager at the IBM T.J. Watson Research Center, where her most notable accomplishment was serving as a key technical lead on the Watson project, in which a high-performing question-answering system defeated the two best human players at the game of Jeopardy!. Before that, she was a member of the technical staff at Lucent Technologies Bell Laboratories, where she focused on spoken dialogue management. Throughout her career, Jennifer has maintained a strong focus on research and development in natural language processing and related areas. She has published extensively in top conferences and journals and is deeply engaged in her research community. Jennifer served as general chair of NAACL-HLT 2012 and program committee cochair of NAACL-HLT 2006, has been an area chair and program committee member for many key conferences, and has served on the editorial boards of multiple journals. She holds a PhD in computer science from the University of Delaware.
©2017, O'Reilly Media, Inc.