Multiscreen experiences are powerful. Starting a task on one device and finishing it on another, or having devices work together to create a unique experience, are just two of the ways we divide our time among devices based on context. (Apple's Handoff and Amazon's Whispersync are two notable examples.)
Conversational assistants such as Siri, Alexa, and Cortana have evolved over the past few years, and their ability to hold natural conversations with users and provide real value keeps increasing. But what happens when one of these conversational agents lives on your phone, on your watch, in your car, and in your living room? When you say "Hey Siri," do all your devices answer?
Karen Kaushansky reviews the different approaches to designing conversational agents and explains why design matters. After looking at the shortcomings of the experience with today’s assistants, Karen explores new paradigms for using speech recognition across devices and starts to define multidevice experiences with conversational assistants.
Karen Kaushansky is director of experience for Zoox, helping build an autonomous vehicle for the future. Named one of the "75 Best Designers in Technology" by Business Insider, Karen creates meaningful and connected experiences in the physical world spanning hardware and software. She worked for Jawbone, designing interactive and audio experiences on devices including the Big Jambox, Mini Jambox, Icon, and UP, and has worked on other smart connected products, such as the Cinder Sensing Cooker and Sensilk smart clothing. Karen worked for many years as a voice user-interface designer at Tellme/Microsoft, designing complex speech recognition systems, such as Ford Sync and Tellme for Windows Mobile.
©2016, O'Reilly Media, Inc.