Tech Focus: L.A. Noire's MotionScan Animation
Depth Analysis discuss L.A. Noire's astonishing facial animation tech
We use stereo reconstruction to generate a 3D patch per camera pair. These 16 patches are then aligned to form a single point cloud, and a mesh is generated with as much noise as possible filtered out. We then fit a regular mesh on top, in conjunction with temporal filtering, to ensure smooth rendering.
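To make the pipeline concrete, here is a minimal sketch of the two steps described above: aligning each camera pair's patch into one point cloud, and temporally filtering a fixed-topology mesh sequence. The function names, array shapes and the exponential-smoothing choice are illustrative assumptions, not Depth Analysis's actual code.

```python
# Illustrative sketch only: merging per-pair patches and temporal filtering.
import numpy as np

def merge_patches(patches, transforms):
    """Align each stereo pair's 3D patch into a common rig space and stack them.
    patches: list of (N, 3) point arrays, one per calibrated camera pair.
    transforms: list of (R, t) rigid transforms into the shared frame."""
    aligned = [pts @ R.T + t for pts, (R, t) in zip(patches, transforms)]
    return np.vstack(aligned)  # single noisy point cloud for this frame

def temporal_smooth(vertex_frames, alpha=0.5):
    """Exponentially smooth per-vertex positions across frames of a
    topology-consistent mesh sequence to reduce reconstruction jitter."""
    smoothed = [vertex_frames[0]]
    for verts in vertex_frames[1:]:
        smoothed.append(alpha * verts + (1.0 - alpha) * smoothed[-1])
    return smoothed
```

Because a regular mesh with fixed topology is fitted to every frame, the same vertex index refers to the same point on the face throughout the take, which is what makes simple per-vertex temporal filtering like this possible.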
The mesh sequence is textured, compressed and packaged for the client to use at their chosen settings. The capture system assumes a Lambertian surface, so the captured appearance is view-independent. We lit the capture volume as flat as possible to allow re-lighting in-game later in real time.
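The Lambertian assumption is what makes the flat-lit capture usable as a plain diffuse texture: since a Lambertian surface looks the same from every viewpoint, the engine can re-light it at run time with a standard diffuse term. A minimal sketch of that idea follows; the names and per-vertex formulation are mine, not the shipping shader.

```python
import numpy as np

def lambert_shade(albedo, normals, light_dir, light_color):
    """Diffuse-only re-lighting of a flat-lit capture.
    albedo: (V, 3) per-vertex colour sampled from the capture texture.
    normals: (V, 3) unit vertex normals; light_dir: (3,); light_color: (3,)."""
    l = light_dir / np.linalg.norm(light_dir)
    ndotl = np.clip(normals @ l, 0.0, None)        # (V,) Lambert N.L term
    return albedo * light_color * ndotl[:, None]   # (V, 3) lit colour
```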
The main reasons not to operate above 30fps before now were cost and the storage requirements (capacity and write speed) needed immediately for our video game projects. For the next rig, we are planning to move to higher frame rates.
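A rough back-of-envelope calculation shows why write speed and capacity scale directly with frame rate for a 16-pair (32-camera) rig. The resolution, bit depth and raw-capture assumption below are illustrative guesses, not published figures.

```python
# Purely illustrative capture-bandwidth estimate.
cameras = 32                # 16 stereo pairs, per the pipeline described above
width, height = 1920, 1080  # assumed sensor resolution (not a published spec)
bytes_per_px = 1            # assumed 8-bit monochrome raw capture

for fps in (30, 60):
    rate = cameras * width * height * bytes_per_px * fps    # bytes per second
    print(f"{fps} fps: {rate / 1e9:.1f} GB/s raw, "
          f"{rate * 3600 / 1e12:.1f} TB per hour of footage")
```

Under those assumptions, doubling the frame rate doubles both the sustained write speed and the per-hour storage footprint, which is the cost pressure described above.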
Indeed, any industry where training and roleplay (with talking heads) may be needed would benefit from MotionScan technologies.
Neither Team Bondi nor Depth Analysis used Rockstar's RAGE engine in L.A. Noire. Team Bondi developed their own engine in 2004, and Depth Analysis' decompression code was provided to them so it could fit within their engine. This approach is how we've been leveraging MotionScan with other new clients.
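That integration seam, where compressed MotionScan data plus supplied decompression code sit inside the client's own engine, might look something like the sketch below. The class, methods and decoder object are entirely hypothetical, invented here to illustrate the middleware pattern; the real API is not public.

```python
class MotionScanClip:
    """Hypothetical wrapper around a compressed MotionScan take.
    Illustrates decoding one facial mesh frame at a time inside a host engine;
    not Depth Analysis's actual interface."""

    def __init__(self, path, decoder):
        self.decoder = decoder             # decompression code supplied to the client
        self.take = decoder.open(path)     # compressed mesh + texture sequence

    def sample(self, time_sec, fps=30.0):
        """Decode the frame nearest to time_sec and hand it to the engine's renderer."""
        frame_index = int(time_sec * fps)
        verts, uvs, texture = self.decoder.decode(self.take, frame_index)
        return verts, uvs, texture
```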
Depth Analysis works closely with developers to discuss the goals of their game and how MotionScan can be used to support their project. We would ensure the necessary steps are taken to be compatible with their existing technology and vice versa - if they want to use a lot of close-ups of their characters, then yes, higher poly counts would be essential, and MotionScan supports that level of detail.
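The close-up question is essentially a level-of-detail decision: how close the camera gets determines which mesh density the engine should use. A minimal sketch of distance-based LOD selection follows; the thresholds are arbitrary illustrative values, not anything specified by Depth Analysis.

```python
def pick_face_lod(distance_m, lod_distances=(1.0, 3.0, 8.0)):
    """Choose which pre-built facial mesh density to use for this frame.
    A game built around close-ups would push the high-detail range out further."""
    for lod, max_dist in enumerate(lod_distances):
        if distance_m <= max_dist:
            return lod                 # 0 = highest poly count
    return len(lod_distances)          # lowest detail beyond the last threshold
```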
I think the larger consideration for any studio wanting to maximise MotionScan from the ground up is how they want to tell the story through the tech and the performance of the characters. For this approach, they would need to treat the shoot more like a film, which involves a new set of considerations that other facial rigs do not pick up on. With MotionScan, what you shoot and see is what you get in the game, so the game producer and director will need to think about things such as continuity with their actors (losing/gaining weight, getting tanned, etc), hiring great actors for the best performance, and so on.
Working on L.A. Noire was a straightforward process, as we wanted everyone to look like themselves. The system is like filming in 3D; what you see is what you get. In our experience it was faster to shoot more variations than to touch up the animation in post-production later.
Many customers keen to use MotionScan have already asked for retargeting, and we're currently looking into it. As MotionScan strives to capture and present the most authentic performance, it would be tough for something like an alien face, because who can definitively say how an alien face is supposed to behave? But yes, we are looking at non-human capture and the challenges around presenting that.
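For context, one common way to think about retargeting is transferring per-vertex displacements from the captured actor's neutral face onto a different character's neutral face. The sketch below assumes the two meshes already share vertex count and correspondence, which is exactly the part that is hard in practice and ill-defined for a face with no human counterpart; it is an illustration of the problem, not Depth Analysis's approach.

```python
import numpy as np

def retarget_frame(source_neutral, source_frame, target_neutral, scale=1.0):
    """Naive delta transfer: apply the captured performance's per-vertex
    displacement to the target character's neutral face.
    All arrays are (V, 3) and assumed to be in vertex correspondence."""
    delta = source_frame - source_neutral       # motion of the captured face
    return target_neutral + scale * delta       # same motion on the target face
```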
We would have loved to spend more time fine-tuning that for L.A. Noire, but it wasn't feasible given the scope of the scripting and the talent involved. Moving forward, we will be developing full-body capture, so we anticipate that this will no longer be an issue once that technology is ready for commercial use.