
Motion Capture and Face Tracking

SynthEyes offers the exciting capability to do full body and facial motion capture using conventional video or film cameras.

STOP! Unless you know how to do supervised tracking and understand moving-object tracking, you will not be able to do motion tracking. The material here builds upon that earlier material; it is not repeated here because it would be exactly that, a repetition.

First, why and when is motion capture necessary? The moving-object tracking discussed previously is very effective for tracking a head, when the face is not doing all that much, or when trackable points have been added in places that don’t move with respect to one another (forehead, jaws, nose). The moving-object mode is good for making animals talk, for example. By contrast, motion capture is used when the motion of the moving features is to be determined, and will then be applied to an animated character. For example, use motion capture of an actor reading a script to apply the same expressions to an animated character. Moving-object tracking requires only one camera, while motion capture requires several calibrated cameras.
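To make the last point concrete, here is a minimal illustrative sketch (Python with numpy; the triangulate function, camera matrices, and numbers are hypothetical examples, not SynthEyes' internal algorithm): with two calibrated cameras, a feature that moves independently of every other feature can still be located in 3-D on every frame by triangulation, which is why motion capture needs several calibrated views, while moving-object tracking can rely on a single camera plus the assumption that the tracked points are rigid with respect to one another.

    # Illustrative only: triangulating one moving feature from two calibrated views.
    import numpy as np

    def triangulate(P1, P2, uv1, uv2):
        """Linear (DLT) triangulation of one point from two 3x4 projection matrices."""
        u1, v1 = uv1
        u2, v2 = uv2
        A = np.array([
            u1 * P1[2] - P1[0],
            v1 * P1[2] - P1[1],
            u2 * P2[2] - P2[0],
            v2 * P2[2] - P2[1],
        ])
        # Solve A x = 0 for the homogeneous 3-D point via SVD.
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]

    if __name__ == "__main__":
        # Hypothetical calibrated rig: one camera at the origin, one shifted along X.
        K = np.diag([800.0, 800.0, 1.0])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

        def project(P, X):
            x = P @ np.append(X, 1.0)
            return x[:2] / x[2]

        X_true = np.array([0.2, -0.1, 4.0])
        # Recovers X_true from the two 2-D observations of this single frame.
        print(triangulate(P1, P2, project(P1, X_true), project(P2, X_true)))

The sketch only works because both projection matrices are known, which is exactly what the calibration procedures later in this manual provide.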

Note: The Geometric Hierarchy (GeoH) Tracking capability (described in the separate manual of that name, see the Help menu), which requires only one camera, can be used in some cases instead of motion capture, and in other cases after motion capture tracking, to produce "joint angles" for export instead of point clouds. There are references to it here; see also the GeoH tracking manual.

Second, we need to establish a few very important points: this is not the kind of capability that you can learn on the fly as you do that important shoot, with the client breathing down your neck. This is not the kind of thing for which you can expect to glance at this manual for a few minutes, and be a pro. Your head will explode. This is not the sort of thing you can expect to apply to some musty old archival footage, or using that old VHS camera at night in front of a flickering fireplace. This is not something where you can set up a shoot for a couple of days, leave it around with small children or animals climbing on it, and get anything usable whatsoever. This is not the sort of thing where you can take a SynthEyes export into your animation software, and expect all your work to be done, with just a quick render to come. And this is not the sort of thing that is going to produce the results of a $250,000 custom full body motion capture studio with 25 cameras.

With all those dire warnings out of the way, what is the good news? If you do your homework, do your experimentation ahead of time, set up technically solid cameras and lighting, read the SynthEyes manual so you have a fair understanding of what the SynthEyes software is doing, and understand your 3-D package well enough to set up your character or face rigging, you should be able to get excellent results.

In this manual, we’ll work through a sample facial capture session. The techniques and issues are the same for full body capture, though of course the tracking marks and overall camera setup for body capture must be larger and more complex.

 

Introduction
Comparison of Motion Capture and GeoH Tracking
Camera Types
Camera Placement
Lighting
Calibration Requirements and Fixturing
Camera Synchronization
Camera Calibration Process
Matching Plan A: Temporary Alignment
Matching Plan B: Side by Side
Matching Plan C: Multiple Camera Views
Matching Plan D: Cross Link by Name
Completing the Calibration
Body and Facial Tracking Marks
Preparation for Two-Dimensional Tracking
Two-Dimensional Tracking
Linking the Shots
Solving
Exports & Rigging
Modeling
Single-Camera Motion Capture
