Today's AI bots achieve little more than reducing the number of taps we make on a phone. Building genuinely superior conversational agents depends on engineering several attributes: the ability to comprehend topics, recalling past discussions in context, intelligently parsing digital feeds to save time, grasping sentiment and emotion in dialogue, and even exhibiting personality. The major computational bottleneck in achieving these capabilities appears to be the machine's inability to restrict error propagation across a handful of NLP and speech recognition tasks.
There's an old adage in computer science: any problem can be solved by adding a layer of indirection. It is epitomized by the greatest invention of our times, the Internet, which is built on a layered architecture. When we chat with someone across the globe, geographical network constraints necessitate a layered architecture for accurate message transmission. Similarly, to chat with conversational bots, cognitive constraints require a layered architecture for accurate message comprehension. By modularizing the desired qualities of human-bot interaction, conversational agents can learn from data while keeping error propagation contained within specific modules. Since many conversational abilities are codependent, modules with specific abilities can be stacked on top of each other in layers. Machine learning through a layered architecture can unleash the true potential of conversational bots.
Suman Roy explores the architectural layers of human-bot interaction and the protocols that govern their learning paradigms. Each layer possesses capabilities built on top of abilities owned by lower layers, and each layer's ability is exposed via protocols. Several machine-learning tools and algorithms already fit into specific layers of this architecture, and humans have roles to play as well. As we move up the layers, human involvement intensifies because current computational methods aren't accurate enough. Conversely, algorithms in the lower layers are more robust and time-tested in production environments.
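The stacking-and-containment idea described above can be sketched in a few lines of Python. This is a minimal illustration, not an implementation from the talk: the layer names (`tokens`, `topic`, `sentiment`), the dict-based "protocol" between layers, and the capability functions are all hypothetical. The key point it demonstrates is that each layer builds on the output of the layer beneath it, and a failure in one module is recorded locally rather than corrupting the lower layers' results.

```python
class Layer:
    """One capability in the stack; exposes its result via a shared dict."""

    def __init__(self, name, fn, lower=None):
        self.name, self.fn, self.lower = name, fn, lower

    def process(self, message):
        # Obtain the lower layer's output first; the base of the
        # stack just wraps the raw message.
        result = self.lower.process(message) if self.lower else {"text": message}
        try:
            result[self.name] = self.fn(result)
        except Exception as exc:
            # Contain the error inside this module: record the failure
            # but pass the lower layers' work upward untouched.
            result[self.name] = {"error": str(exc)}
        return result


# Hypothetical capability functions, stacked bottom to top.
tokens = Layer("tokens", lambda r: r["text"].lower().split())
topic = Layer("topic",
              lambda r: "weather" if "rain" in r["tokens"] else "unknown",
              lower=tokens)
sentiment = Layer("sentiment",
                  lambda r: "negative" if "hate" in r["tokens"] else "neutral",
                  lower=topic)

print(sentiment.process("I hate this rain"))
```

In a real system each `fn` would be a trained model, but the containment property is the same: if the topic classifier throws, the tokens below it and the raw text still reach the layers above.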
Suman Deb Roy is a computer scientist and the author of Social Multimedia Signals: A Signal Processing Approach to Social Network Phenomena. Suman currently works as the lead data scientist at betaworks, a NY-based startup studio. Previously, he worked with Microsoft Research and was a fellow at the Missouri School of Journalism. Suman is the recipient of the 2015 IEEE Communications Society MMTC Best Journal Paper Award and the 2013 Missouri Honor Medal for Outstanding PhD Research. He also serves as the editor of the IEEE Special Technical Community on Social Networking. Suman is responsible for building the machine-learning algorithms driving product features in Digg, Instapaper, and Poncho.
©2016, O'Reilly Media, Inc.