Presented By
O’Reilly + Intel AI
Put AI to Work
April 15-18, 2019
New York, NY

Using Deep Learning to Create Interactive Digital Actors

Kevin He (DEEPMOTION, INC.)
4:05pm–4:45pm Wednesday, April 17, 2019
Interacting with AI
Location: Regent Parlor
Secondary topics: Media, Marketing, Advertising, Models and Methods, Reinforcement Learning

Who is this presentation for?

CTOs, Engineers, Creative Producers

Level

Beginner

Prerequisite knowledge

While the talk covers highly technical work, the presentation itself is beginner-friendly. Some knowledge of VFX or game development may be useful.

What you'll learn

The potential for AI to enable new forms of entertainment.

Description

Digital character interaction is hard to fake, whether it’s between two characters, between users and characters, or between a character and its environment. Nevertheless, interaction is central to building immersive XR experiences, robotic simulation, and user-driven entertainment. Kevin He will discuss using physical simulation and deep learning to create interactive character technology.

The talk will cover “Interactive Motion” and the physical underpinnings needed to build reactive characters, starting with a physics engine optimized for joint articulation that can dynamically solve for multiple connected rigid bodies. We’ll cover how we used this engine to build a standard ragdoll, give the ragdoll musculature and posture, and eventually solve for basic locomotion via control of the simulated joints and muscles.
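DeepMotion’s engine itself isn’t public, but the core idea of actuating an articulated body can be sketched compactly. Below is a minimal, hypothetical Python sketch of PD (proportional-derivative) “muscle” control driving ragdoll joints toward a target posture; a production engine would solve all connected rigid bodies simultaneously (e.g., with a Featherstone-style articulated-body solver) rather than integrating each joint independently, as this toy version does.

    # Hypothetical sketch (not DeepMotion's engine): PD "muscle" control
    # driving the joints of an articulated ragdoll toward a target posture.
    from dataclasses import dataclass

    @dataclass
    class Joint:
        angle: float       # current joint angle (radians)
        velocity: float    # current angular velocity (rad/s)
        kp: float = 300.0  # proportional gain ("muscle" stiffness)
        kd: float = 30.0   # derivative gain (damping)

        def torque(self, target_angle: float) -> float:
            # PD control: pull toward the target pose while damping the
            # velocity, like a spring-damper muscle acting on the joint.
            return self.kp * (target_angle - self.angle) - self.kd * self.velocity

    def step(joints, target_pose, dt=1.0 / 240.0, inertia=1.0):
        # One simulation step: apply muscle torques and integrate each joint.
        # A real engine solves all connected rigid bodies together so
        # constraint forces propagate through the whole skeleton; here each
        # joint is integrated on its own for brevity.
        for joint, target in zip(joints, target_pose):
            accel = joint.torque(target) / inertia
            joint.velocity += accel * dt
            joint.angle += joint.velocity * dt

    # Drive a three-joint chain toward a crouch-like posture for one second.
    ragdoll = [Joint(0.0, 0.0) for _ in range(3)]
    for _ in range(240):
        step(ragdoll, target_pose=[0.4, -0.8, 0.4])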

The talk will also cover applications of this real-time physics simulation for characters in XR, from dynamic full-body VR avatars, to interactive NPCs and pets, to entirely interactive AR avatars and emojis.

We’ll then explain how we applied an AI model to our physically simulated character to generate what we call interactive “Motion Intelligence”. We give our character controllers sample motion data to learn motor skills, reverse engineering the motion in terms of the simulated musculature. We also optimize for additional reward functions to ensure natural, flexible motion. The character can then produce the behavior in real-time physical simulation, transforming static animation data into a flexible and interactive skill. This is the first step toward building a digital cerebellum.
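The talk stays at a high level here; for a concrete picture, motion-imitation RL systems in this family (e.g., DeepMimic, Peng et al., SIGGRAPH 2018) typically pay the controller for matching the reference clip at every step. The sketch below is a hypothetical imitation reward in that style, not DeepMotion’s actual objective:

    import numpy as np

    # Hypothetical imitation reward (illustrative; not DeepMotion's objective).
    def imitation_reward(sim_pose, ref_pose, sim_vel, ref_vel,
                         w_pose=0.7, w_vel=0.3):
        # The controller is paid for matching the reference clip's joint
        # angles and velocities; exponentials keep each term in (0, 1].
        # Systems like DeepMimic add end-effector and root-position terms.
        pose_err = np.sum((sim_pose - ref_pose) ** 2)
        vel_err = np.sum((sim_vel - ref_vel) ** 2)
        return w_pose * np.exp(-2.0 * pose_err) + w_vel * np.exp(-0.1 * vel_err)

    # Example: a pose close to the reference earns a reward near 1.0.
    r = imitation_reward(np.zeros(12), np.full(12, 0.05), np.zeros(12), np.zeros(12))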

We’ll then cover creating a “Motion Brain”: a second level of training stitches multiple learned behaviors into a control graph, allowing seamless blending between motor skills in a variety of sequences.
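The control graph itself can be pictured as a small data structure. The following is a hypothetical Python sketch, with made-up skill names, of how a high-level controller might restrict skill sequencing to transitions the blending stage was trained on:

    # Hypothetical control graph over learned motor skills (made-up names).
    # Nodes are trained skill policies; edges mark transitions the blending
    # stage was trained to handle.
    CONTROL_GRAPH = {
        "idle": ["walk", "wave"],
        "walk": ["idle", "run"],
        "run":  ["walk", "jump"],
        "jump": ["run"],
        "wave": ["idle"],
    }

    def next_skill(current: str, requested: str) -> str:
        # Follow the requested transition only if the graph supports it;
        # otherwise hold the current skill until a valid edge opens up.
        return requested if requested in CONTROL_GRAPH[current] else current

    assert next_skill("idle", "walk") == "walk"
    assert next_skill("idle", "run") == "idle"  # no direct idle -> run edge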

Finally, we’ll look at advanced applications of Motion Intelligence and the product we built to put this applied AI tooling in the hands of developers. We’ll cover building a basketball-playing, physically simulated character (published in a SIGGRAPH 2018 paper) using deep reinforcement learning. These characters can control a 3D ball in space using advanced ball-handling skills.
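As with the imitation objective above, the published pipeline (which pairs trajectory optimization with deep RL) is more involved than a talk summary can convey. Purely as a hypothetical illustration, a shaped reward for ball control might blend trajectory tracking with contact terms:

    import numpy as np

    # Hypothetical shaped reward for ball control (illustrative only; the
    # SIGGRAPH 2018 work combines trajectory optimization with deep RL and
    # uses a far more detailed objective).
    def dribble_reward(ball_pos, ball_target, hand_pos, ball_contact):
        tracking = np.exp(-4.0 * np.sum((ball_pos - ball_target) ** 2))  # ball on plan
        reach = np.exp(-2.0 * np.sum((hand_pos - ball_pos) ** 2))        # hand near ball
        contact_bonus = 0.1 if ball_contact else 0.0                     # clean touches
        return 0.6 * tracking + 0.3 * reach + contact_bonus

    r = dribble_reward(np.array([0.0, 1.0, 0.0]), np.array([0.1, 1.0, 0.0]),
                       np.array([0.0, 1.1, 0.0]), ball_contact=True)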

Kevin He

DEEPMOTION, INC.

Kevin He has spent his career pushing the boundaries of gaming and engineering. Prior to founding DeepMotion, Kevin was CTO of Disney’s mid-core mobile game studio, technical director at ROBLOX, and a senior engine developer on World of Warcraft at Blizzard. He was also a technical lead at Airespace (now part of Cisco Systems). Kevin has 16 years of engineering and management experience and has shipped multiple AAA titles, including World of Warcraft, StarCraft II, Star Wars Commander, and ROBLOX.
