


"We had imagined that the system would be used where the actors would be backstage, but they wanted him onstage for the TED Talk," explains Digital Domain software engineer, Melissa Cell. However, they also knew that it would be an incredible challenge to bring the technology out of their lab for the first time. When the opportunity came to demonstrate the technology at a TED Talk-and feature the first-ever digital double to deliver such a presentation-Digital Domain jumped at the chance. Roble says that much of the team's time on the project has been spent working out the kinks and polishing the system. IKINEMA streams, retargets, and cleans up his body performance and synchronizes movements, with Epic Games' Unreal Engine rendering DigiDoug and NVIDIA GPUs enabling the rendering and machine learning processes behind the demonstration.

Bringing DigiDoug to life requires the fusion of extensive technology and expertise. The Xsens MVN motion capture system captures Doug's exacting body movements, while Manus VR gloves capture his hand and finger movements in real time. The USC Institute for Creative Technologies' Vision and Graphics Lab scanned Doug's face in incredible detail, Dimensional Imaging was tapped to capture his facial movements, and a Fox VFX Lab helmet camera tracks his face onstage. IKINEMA streams, retargets, and cleans up his body performance and synchronizes movements, with Epic Games' Unreal Engine rendering DigiDoug and NVIDIA GPUs powering both the rendering and the machine learning processes behind the demonstration.
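The article doesn't detail the software glue between these components, but the division of labor implies a per-frame data flow: body and finger poses are captured, fused, retargeted onto the character's skeleton, and handed to the renderer, all within a single frame. The short Python sketch below illustrates that flow under stated assumptions; every class and function name here (read_body_pose, retarget, submit_to_renderer, and so on) is a hypothetical stand-in, not the actual Xsens, Manus, IKINEMA, or Unreal Engine API.

```python
# Hypothetical sketch of a real-time performance-capture frame loop.
# Stand-in functions simulate the capture, retargeting, and render
# stages described above; none of this is vendor API code.

import time
from dataclasses import dataclass


@dataclass
class Pose:
    """Joint rotations keyed by joint name (stand-in for a full skeleton)."""
    joints: dict[str, tuple[float, float, float]]


def read_body_pose() -> Pose:
    # Stand-in for one frame of body data from the inertial mocap stream.
    return Pose({"spine": (0.0, 0.0, 0.0), "r_arm": (10.0, 0.0, 5.0)})


def read_finger_pose() -> Pose:
    # Stand-in for one frame of hand/finger data from the glove stream.
    return Pose({"r_index": (30.0, 0.0, 0.0)})


def retarget(source: Pose, bone_map: dict[str, str]) -> Pose:
    # Remap captured joints onto the digital character's skeleton.
    # Real retargeting also compensates for differing proportions and
    # cleans up sensor noise; this version only renames joints.
    return Pose({bone_map[j]: rot
                 for j, rot in source.joints.items() if j in bone_map})


def submit_to_renderer(pose: Pose) -> None:
    # Stand-in for streaming the final pose to the game engine.
    print(f"render frame with {len(pose.joints)} joints")


BONE_MAP = {"spine": "Spine01", "r_arm": "RightArm", "r_index": "RightIndex1"}
FRAME_BUDGET = 1.0 / 60.0  # the whole chain must fit in one 60 fps frame


def run_one_frame() -> None:
    start = time.perf_counter()
    body, fingers = read_body_pose(), read_finger_pose()
    merged = Pose({**body.joints, **fingers.joints})  # fuse body + hands
    submit_to_renderer(retarget(merged, BONE_MAP))
    if time.perf_counter() - start > FRAME_BUDGET:
        print("warning: frame over budget; latency would be visible onstage")


run_one_frame()
```

The frame-budget check is the crux of "real-time" here: capture, retargeting, and rendering all have to finish inside a single frame, or DigiDoug would visibly lag behind Roble onstage.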

"Over the last two years, that's what we did."īringing DigiDoug to life requires the fusion of extensive technology and expertise. "The combination of the power of rendering that's possible now with gaming engines, and the speed that we were getting out of our machine learning experiments we thought, 'Holy smokes, could we take almost all of technology that we've been developing for feature-quality work, and then put it into real-time?'" Roble recalls. The team quickly saw the potential of exploring it further. It was a different approach to the kind of work that Digital Domain already specialized in, and also paired well with other ongoing in-house R&D experiments. The possibilities for Digital Domain's tech-much like with the Xsens suit itself-are seemingly endless.Īccording to Roble, the team was first inspired to investigate real-time digital humans after witnessing a "Meet Mike" demo from facial animation specialists Cubic Motion at SIGGRAPH two years back. The results were astounding, and as seen on the real Doug's body on the TED stage, it was the Xsens MVN inertial motion capture suit that helped realize his team's vision. Only it wasn't just Doug talking to the audience-it was also "DigiDoug," a fantastically realistic digital duplicate of Roble himself, who mimicked Roble's own body and facial movements on the screen above him. Digital Domain's Digital Human Group recently showcased the early results of its work during a TED Talk, in which Doug Roble, the studio's head of software R&D, gave a presentation about the technology.
