Creating autonomy for social robots
Who is this presentation for?
- Data practitioners, engineers, researchers, and roboticists
The technologies enabling social robots are becoming more capable and affordable. Yet creating autonomy for human-robot interaction is still a nontrivial task, requiring the integration of essential AI building blocks, including natural language, vision, navigation, and planning and reasoning. In particular, it's often difficult to capture human social behaviors and rules algorithmically in robots.
Research has centered on deploying social robots to interact with real people in public spaces such as train stations and shopping malls. Building on over 20 years of combined experience with robots ranging from robotic wheelchairs to highly realistic humanlike androids, Dylan Glas and Phoebe Liu explore some of their work using AI and machine learning to understand social situations and generate robot behavior, and using learning-by-imitation techniques to reproduce social behavior in an autonomous robot. They cover ways they have adapted robots to the challenges of real-world environments outside the lab, such as using sensor data to understand the social use of space and predict the movement of pedestrians. You'll see the algorithms and architectures Dylan and Phoebe have developed to enable social robots to approach people, hold conversational interactions, and provide guidance and services to people in shopping malls, elementary schools, and beyond.
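The abstract doesn't specify how pedestrian movement is predicted; as a minimal illustration of the idea, one common baseline is constant-velocity extrapolation from a short sensor track. The function name, features, and data below are hypothetical, not taken from the speakers' system:

```python
# Sketch: predict a pedestrian's near-future position from tracked
# (time, x, y) sensor observations, assuming roughly constant velocity.

def predict_position(track, dt_future):
    """track: time-ordered list of (t, x, y) observations in seconds/meters."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt  # estimated velocity
    return (x1 + vx * dt_future, y1 + vy * dt_future)

# A walker moving at +1 m/s along x, observed every 0.5 s:
track = [(0.0, 1.0, 2.0), (0.5, 1.5, 2.0), (1.0, 2.0, 2.0)]
print(predict_position(track, 1.0))  # → (3.0, 2.0)
```

Real deployments typically layer filtering (e.g., a Kalman filter) and social-context models on top of this kind of kinematic baseline.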
They also detail a data-driven approach for generating speech and motion behavior in an autonomous robot, using learning-by-imitation from unscripted human-human interaction data collected from sensors. This technique requires no manual design, human annotation, modeling, or natural-language understanding, and it’s able to autonomously generate socially appropriate locomotion and speech behaviors for a humanoid robot. You’ll leave with some key insights into how learning from human behavior is critical for developing autonomous robots for the real world.
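The talk's actual learning-by-imitation pipeline isn't detailed here; as a toy sketch of the core idea, a robot can retrieve the recorded human action whose observed situation is nearest to its current sensed situation. The feature choices, example data, and function names below are illustrative assumptions:

```python
# Sketch: behavior generation by imitation via nearest-neighbor retrieval
# over (situation, action) pairs logged from human-human interactions.
import math

# Hypothetical features: [distance to partner (m), partner speed (m/s)]
examples = [
    ([0.8, 0.0], "greet"),
    ([2.5, 1.2], "approach"),
    ([5.0, 0.1], "wait"),
]

def imitate(situation):
    """Return the action from the closest recorded human example."""
    features, action = min(examples,
                           key=lambda ex: math.dist(ex[0], situation))
    return action

print(imitate([2.2, 1.0]))  # closest example suggests "approach"
```

The appeal of this family of techniques, as the abstract notes, is that the behaviors come directly from unscripted human data rather than hand-designed rules or annotations.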
Prerequisite knowledge
- A basic knowledge of machine learning
What you'll learn
- See practical approaches to enabling human-robot interaction in the real world, including dealing with noisy real-world multimodal data, experiences from field trials, and understanding what it takes to build a humanoid robot
Dylan Glas is a roboticist and researcher with over a decade of experience in the field of social human-robot interaction. He's a senior robotics software architect at Futurewei Technologies. Previously, he was a guest associate professor at Osaka University and a group leader and senior researcher at the Advanced Telecommunications Research Institute (ATR) in Kyoto, Japan, where he developed frameworks and algorithms for multimodal perception, machine learning, and autonomous behavior generation for a variety of humanoid social robots. He's been featured in international media, including CBS, BBC, CNN, National Geographic, and the Guardian, for his work as the lead architect of ERICA, a highly humanlike conversational android that is currently operating as a TV news anchor in Japan.
Phoebe Liu is a machine learning scientist at Figure Eight, an AI and machine learning startup based in San Francisco. Previously, she was a robotics researcher in Japan, working in the Hiroshi Ishiguro Laboratory at the Advanced Telecommunications Research Institute International (ATR), while earning her PhD at Osaka University. Her projects included enabling conversational social robots to imitate human behaviors, android science, and teleoperation systems for semiautonomous robots.