Digital character interaction is hard to fake, whether it’s between two characters, between users and characters, or between a character and its environment. Nevertheless, interaction is central to building immersive XR experiences, robotic simulation, and user-driven entertainment. Kevin He explains how to use physical simulation and machine learning to create interactive character technology.
You’ll explore interactive motion and the physical foundations of reactive characters. Starting from a physics engine optimized for joint articulation that can dynamically solve for multiple connected rigid bodies, you’ll learn how to build a standard ragdoll, give that ragdoll musculature and posture control, and ultimately solve for basic locomotion by controlling the simulated joints and muscles. Kevin covers applications of this real-time physics simulation for characters in XR, from dynamic full-body VR avatars to interactive NPCs and pets to fully interactive AR avatars and emojis.
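One common way to give a simulated ragdoll muscle-like actuation, in the spirit of the joint control described above, is a PD (proportional-derivative) controller that drives each joint toward a target angle. The sketch below is a minimal, hypothetical illustration for a single hinge joint; the class and gain values are illustrative assumptions, not the session’s actual code.

```python
import math
from dataclasses import dataclass

@dataclass
class HingeJoint:
    """Hypothetical 1-DoF hinge joint of a ragdoll."""
    angle: float = 0.0      # current joint angle (radians)
    velocity: float = 0.0   # current angular velocity (rad/s)
    inertia: float = 1.0    # effective rotational inertia of the child link

def pd_torque(joint: HingeJoint, target_angle: float,
              kp: float = 60.0, kd: float = 8.0) -> float:
    """Torque pulling the joint toward target_angle while damping motion."""
    return kp * (target_angle - joint.angle) - kd * joint.velocity

def step(joint: HingeJoint, target_angle: float, dt: float = 1.0 / 240.0) -> None:
    """Advance the joint one timestep with semi-implicit Euler integration."""
    torque = pd_torque(joint, target_angle)
    joint.velocity += (torque / joint.inertia) * dt
    joint.angle += joint.velocity * dt

# Drive an elbow-like joint toward 90 degrees over ~8 simulated seconds.
elbow = HingeJoint()
for _ in range(2000):
    step(elbow, math.pi / 2)
```

In a full articulated body, one such controller per joint supplies the torques, and the constraint solver keeps the connected rigid bodies assembled while they move.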
Kevin then explains how to apply an AI model to your physically simulated character to generate interactive motion intelligence: your character controllers learn motor skills from sample motion data by reverse engineering the motion in terms of the simulated musculature, while optimizing additional reward functions to keep the motion natural and flexible. The character can then reproduce the behavior in real-time physical simulation, transforming static animation data into a flexible, interactive skill. This is the first step in building a digital cerebellum. You’ll learn how to build on this foundation to create a “motion brain”: a second stage of training stitches multiple behaviors into a control graph, allowing seamless blending between learned motor skills in a variety of sequences.
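A reward of the kind described above is often built by comparing the simulated character’s pose against a reference motion clip and blending that imitation term with an extra task objective. The following is a hedged sketch of that idea only; the function names, weights, and scale factor are illustrative assumptions, not the session’s actual formulation.

```python
import math

def pose_reward(sim_angles, ref_angles, scale: float = 2.0) -> float:
    """Exponentiated negative pose error in [0, 1]; 1.0 means perfect tracking.

    Illustrative imitation term: penalize squared differences between the
    simulated joint angles and the reference motion's joint angles.
    """
    err = sum((s - r) ** 2 for s, r in zip(sim_angles, ref_angles))
    return math.exp(-scale * err)

def total_reward(sim_angles, ref_angles, task_reward: float,
                 w_imitate: float = 0.7, w_task: float = 0.3) -> float:
    """Blend motion imitation with an additional reward function (e.g. balance
    or goal-reaching), so learned motion stays natural yet flexible."""
    return w_imitate * pose_reward(sim_angles, ref_angles) + w_task * task_reward

# Perfect tracking plus a fully satisfied task objective yields reward 1.0.
r = total_reward([0.1, 0.5], [0.1, 0.5], task_reward=1.0)
```

The reinforcement learner then adjusts the joint controllers to maximize this reward over time, which is what turns the static clip into a reusable, physically grounded skill.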
Kevin concludes with a look at some advanced applications of motion intelligence and explains how to turn what you’ve built into an applied AI tool for developers. You’ll see how to build a physically simulated basketball-playing character (as published in a paper for SIGGRAPH 2018) using deep reinforcement learning: a character that can dribble and control a ball in 3D space.
Kevin He is the founder of DeepMotion. Kevin has spent his career pushing the boundaries of gaming and engineering. Previously, he was CTO of Disney’s midcore mobile game studio, a technical director at ROBLOX, a senior developer on World of Warcraft at Blizzard, and a technical lead at Airespace (now part of Cisco Systems). He has 16 years of engineering and management experience and has shipped multiple AAA titles, including World of Warcraft, StarCraft II, Star Wars Commander, and ROBLOX.
©2019, O'Reilly Media, Inc. • (800) 889-8969 or (707) 827-7019 • Monday-Friday 7:30am-5pm PT • All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners. • email@example.com