Put AI to Work
April 15-18, 2019
New York, NY

Using AI to create interactive digital actors

Kevin He (DeepMotion)
4:05pm–4:45pm Wednesday, April 17, 2019
Interacting with AI
Location: Regent Parlor
Secondary topics: Media, Marketing, Advertising; Models and Methods; Reinforcement Learning
Average rating: 4.00 (2 ratings)

Who is this presentation for?

  • CTOs, engineers, and creative producers

Level

Beginner

Prerequisite knowledge

  • Familiarity with VFX and game development (useful but not required)

What you'll learn

  • Understand the potential for AI to enable new forms of entertainment

Description

Digital character interaction is hard to fake, whether it’s between two characters, between users and characters, or between a character and its environment. Nevertheless, interaction is central to building immersive XR experiences, robotic simulation, and user-driven entertainment. Kevin He explains how to use physical simulation and machine learning to create interactive character technology.

You’ll explore interactive motion and the physical underpinnings needed to build reactive characters: starting from a physics engine optimized for joint articulation that can dynamically solve for multiple connected rigid bodies, you’ll discover how to build a standard rag doll, give it musculature and posture, and eventually solve for basic locomotion by controlling the simulated joints and muscles. Kevin covers applications of this real-time physics simulation for characters in XR, from dynamic full-body VR avatars to interactive NPCs and pets to fully interactive AR avatars and emoji.
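
For a concrete flavor of the joint control involved, here’s a minimal sketch in Python (not DeepMotion’s actual engine) of driving rag doll joints toward a target pose with proportional-derivative “muscle” torques; the gains and the standing-pose target below are illustrative:

    import numpy as np

    def pd_joint_torques(q, q_dot, q_target, kp=300.0, kd=30.0):
        """Proportional-derivative 'muscle' torques pulling each joint
        angle q toward q_target, damped by the joint velocity q_dot.
        kp/kd are illustrative stiffness and damping gains."""
        q, q_dot, q_target = map(np.asarray, (q, q_dot, q_target))
        return kp * (q_target - q) - kd * q_dot

    # Example: three joints of a rag doll pulled toward a standing pose.
    # An articulated-body solver (e.g. Featherstone's algorithm) would
    # then integrate the connected rigid bodies under these torques at
    # each physics step.
    torques = pd_joint_torques(q=[0.4, -0.1, 0.0],
                               q_dot=[0.0, 0.2, 0.0],
                               q_target=[0.0, 0.0, 0.0])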

Kevin then explains how to apply an AI model to your physically simulated character to generate interactive motion intelligence. You’ll give your character controllers sample motion data so they can learn motor skills, reverse engineering the motion in terms of the simulated musculature and optimizing additional reward functions to keep the motion natural and flexible. The character can then reproduce the behavior in real-time physical simulation, transforming static animation data into a flexible, interactive skill. This is the first step in building a digital cerebellum. You’ll learn how to build on this foundation to create a “motion brain”: a second level of training stitches multiple behaviors into a control graph, allowing seamless blending between learned motor skills in a variety of sequences.
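
A sketch of what such a reward might look like, assuming a motion-imitation setup in the spirit of published work like DeepMimic (SIGGRAPH 2018); the weights, error scales, and graph below are illustrative, not the talk’s actual values:

    import numpy as np

    def motion_reward(sim_pose, ref_pose, sim_vel, ref_vel,
                      task_reward=0.0, w_pose=0.65, w_vel=0.1, w_task=0.25):
        """Blend a pose-tracking (imitation) term with an extra task term.
        Poses and velocities are flat arrays of joint angles / velocities
        from the simulation and from the reference motion clip."""
        pose_err = np.sum((np.asarray(sim_pose) - np.asarray(ref_pose)) ** 2)
        vel_err = np.sum((np.asarray(sim_vel) - np.asarray(ref_vel)) ** 2)
        r_pose = np.exp(-2.0 * pose_err)   # 1.0 when exactly on the reference
        r_vel = np.exp(-0.1 * vel_err)
        return w_pose * r_pose + w_vel * r_vel + w_task * task_reward

    # A "motion brain" can then be a control graph over learned skills;
    # a higher-level controller picks which edge to blend into at runtime.
    motion_graph = {
        "idle": ["walk"],
        "walk": ["idle", "run"],
        "run":  ["walk", "jump"],
        "jump": ["run"],
    }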

Kevin concludes with a look at some advanced applications of motion intelligence and explains how to use what you’ve built to create an applied AI tool for developers. You’ll see how to build a physically simulated basketball-playing character (as published in a SIGGRAPH 2018 paper) using deep reinforcement learning; the character learns advanced ball-control skills to dribble and maneuver a ball in 3D space.
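
At the training-loop level, deep RL work of this kind typically alternates between collecting physics-simulated rollouts and updating a policy with an algorithm such as PPO. A schematic rollout collector, assuming a hypothetical gym-style environment (not the paper’s actual code), might look like:

    def rollout(env, policy, horizon=1000):
        """Collect one episode of (observation, action, reward) tuples.
        `env` is assumed to expose a gym-style reset()/step() API whose
        observations include the character's body state plus the ball's
        position and velocity; `policy` maps an observation to joint
        targets or torques."""
        obs = env.reset()
        trajectory = []
        for _ in range(horizon):
            action = policy(obs)
            obs, reward, done, _ = env.step(action)
            trajectory.append((obs, action, reward))
            if done:
                break
        return trajectory  # fed to a policy-gradient update (e.g. PPO)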

Kevin He

DeepMotion

Kevin He is the founder of DeepMotion and has spent his career pushing the boundaries of gaming and engineering. Previously, he was CTO of Disney’s midcore mobile game studio, a technical director at ROBLOX, a senior developer on World of Warcraft at Blizzard, and a technical lead at Airespace (now part of Cisco Systems). He has 16 years of engineering and management experience and has shipped multiple AAA titles, including World of Warcraft, StarCraft II, Star Wars Commander, and ROBLOX.