Why SynthEyes Stabilization?

Certainly you can find other stabilization software, and with some work you can use most compositing packages to stabilize a shot. Why use SynthEyes for stabilization?

  1. To avoid widespread incorrect stabilization using other tools;
  2. To take advantage of the accuracy of 3-D feature re-projection for deep sub-pixel accuracy;
  3. To take advantage of the fast and comprehensive automatic and supervised tracking tools in SynthEyes;
  4. To maximize image quality by eliminating redundant image re-sampling;
  5. To take advantage of the additional functionality of the integrated SynthEyes stabilizer.

We'll look at each of these in more detail in the following sections.

Incorrect Stabilization

It is sad but true: incorrect stabilization will wreck a shot, and it happens all the time. We get shots in tech support: "I can't get this to solve right." One look at the source footage and we ask, "You stabilized it, right?!" How could we tell?

The hallmark of incorrect stabilization is that, though the point of interest remains stable, the corners of the image appear to be doing some very strange things, moving in and out. This is the result of 2-D stabilization, or 3-D stabilization with an incorrect field of view.

Not only will the footage look bad, it cannot be 3-D tracked properly, because the usual mathematics of a camera have been corrupted. This happens with 2-D stabilization even when the distortion in the corners is too small to see visually: it will still distort a 3-D solve.

The underlying problem is that 2-D stabilization changes the image in a way that no real camera can. Shifting an image is not the same as pointing a real camera in a different direction, yet that is exactly what physically consistent stabilization must do.

By contrast, 3-D stabilization simulates re-shooting the scene with a more stable camera, at the same position and lens field of view, but pointing in a different direction. To do this, it (virtually) puts the original image on a projection screen in front of the original camera, and re-shoots from the new stable camera. 

The result is "keystone correction" similar to that performed by table-top video projectors. If you take a projector, pick it up, and start moving the image around on the wall, you will see very distorted images. You need keystone correction to get a properly square image from your projector, when the image is projected at an angle to the wall. That keystone correction is what SynthEyes stabilization provides.

Without keystone correction, with only simple shifting-style 2-D stabilization, you are creating the distorted images of an off-angle projector. The center of perspective of the image is not at its actual center. Consequently, not only will you have trouble with 3-D tracking, but you will not be able to render matching images either, since renderers also require the perspective center to be at the center of the image. The larger the shift required, the larger the distortion.

For proper stabilization, you need to perform keystone correction, and to perform keystone correction, you need the correct camera field of view. Conveniently, SynthEyes calculates the exact field of view during solving, which is one of the advantages of integrating tracking and stabilization in SynthEyes.
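As an illustration of the principle (not SynthEyes internals), a rotation-only 3-D stabilization can be modeled as the homography H = K·R·K⁻¹, where K is a pinhole calibration matrix built from the field of view and R is the corrective rotation. All names and values below are hypothetical. A plain 2-D shift moves every pixel by the same amount; the homography moves the corners by different amounts, and that difference is the keystone correction:

```python
import math

# Sketch of rotation-only ("keystone-correcting") stabilization under a
# pinhole camera model. Illustrative only; not SynthEyes code.

def mat3_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply_homography(h, x, y):
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return ((h[0][0] * x + h[0][1] * y + h[0][2]) / w,
            (h[1][0] * x + h[1][1] * y + h[1][2]) / w)

def keystone_homography(width, height, hfov_deg, pan_deg):
    # Focal length in pixels from the horizontal field of view:
    # f = (w / 2) / tan(hfov / 2) -- this is why the correct FOV is required.
    f = (width / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    cx, cy = width / 2.0, height / 2.0
    K = [[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]]
    K_inv = [[1.0 / f, 0.0, -cx / f], [0.0, 1.0 / f, -cy / f], [0.0, 0.0, 1.0]]
    t = math.radians(pan_deg)
    c, s = math.cos(t), math.sin(t)
    R = [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]  # pure pan (yaw)
    return mat3_mul(K, mat3_mul(R, K_inv))

# A 2-degree corrective pan of a 1920x1080, 60-degree-HFOV camera:
H = keystone_homography(1920, 1080, 60.0, 2.0)
center_dx = apply_homography(H, 960, 540)[0] - 960
corner_dx = apply_homography(H, 0, 0)[0] - 0
# The corner shifts noticeably farther than the center, and also moves
# vertically -- the corners "move in and out" relative to a plain 2-D
# shift, which would move every pixel by exactly center_dx.
```

A 2-D stabilizer effectively applies only the constant `center_dx` everywhere; the per-pixel difference between the two is the distortion the text describes.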

Feature Re-Projection

With SynthEyes stabilization, you can 3-D solve the shot, then take advantage of the high accuracy provided by the solved 3-D positions (or solved 2-D positions for nodal tripod shots). The solved positions are more accurate because they are averaged over the entire length of the shot, hundreds or thousands of frames' worth.

While a traditional stabilizer may be thrown off by any (and every) bad frame, SynthEyes will re-project the 3-D position of the tracker, to find out where it should be—as a super-accurate deep-sub-pixel position. 
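A minimal sketch of what re-projection means under a pinhole model (an illustration, not SynthEyes's code; the axis-aligned camera and all values are assumptions for simplicity): given a solved 3-D tracker position and the solved camera for a frame, the expected 2-D position falls out of simple projection geometry, whether or not the feature is visible on that frame:

```python
# Sketch of re-projecting a solved 3-D tracker position through a solved
# camera to get its expected sub-pixel 2-D position. Illustrative only;
# assumes an axis-aligned camera looking down +Z for simplicity.

def reproject(point3d, cam_pos, focal_px, cx, cy):
    x = point3d[0] - cam_pos[0]
    y = point3d[1] - cam_pos[1]
    z = point3d[2] - cam_pos[2]
    # Pinhole projection; works even if the feature is occluded or
    # off-screen on this frame, because only geometry is involved.
    return (cx + focal_px * x / z, cy + focal_px * y / z)

# A solved point one unit right and half a unit up, ten units away:
u, v = reproject((1.0, 0.5, 10.0), (0.0, 0.0, 0.0), 1000.0, 960.0, 540.0)
# u = 1060.0, v = 590.0 -- a deep-sub-pixel position from geometry alone.
```

Because the answer comes from the solved geometry rather than from the pixels of one frame, a bad frame, an occluding actor, or an off-screen feature cannot throw it off.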

You can get the re-projected position of a feature even if an actor has walked in front of it, or it has gone off-screen!

You can even stabilize a position in the middle of the air, with nothing there at all! (Think of those commercials with big headlines floating between buildings in a city.) If your shot calls for the camera to orbit around a virtual object that has yet to be inserted, you need this! You can shoot the footage roughly as planned, solve it, put a 3-D point at the location of your virtual object, then stabilize on that imaginary point in mid-air. "What great camera work!" Very cool. Try doing that with a lame 2-D tracker and stabilizer.

Better Tracking

This reason to use SynthEyes for stabilization is very simple. SynthEyes was designed from the start for tracking, and it provides the whole gamut of tools to do so, including both supervised and automatic tracking. One of the reasons we added stabilization to SynthEyes is that so many people asked to be able to use SynthEyes instead of the trackers in their other software applications. While SynthEyes's 2-D export scripts provided an initial answer, adding stabilization to SynthEyes is even better.

With SynthEyes, tracking for stabilization is faster and easier, and produces better results. There is no need to laboriously supervise the track of a single flaky feature in a compositing package when you can quickly auto-track the entire scene in SynthEyes, then use super-accurate 3-D points.

Image Quality

If you construct a setup in a node-based compositor that performs lens (un)distortion, sub-pixel stabilization, a small zoom, and possibly an image format conversion, the image must be re-sampled at each stage, with an attendant loss of image quality.

But in SynthEyes, because of the integrated image preprocessor, all of those operations are combined internally and require only a single image re-sampling at the end. The result: higher image quality.
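The principle can be sketched with plain matrix composition (an illustration only, not the image preprocessor itself; the real pipeline also folds in nonlinear lens distortion, which is not a matrix). Chaining a stabilizing shift, a small zoom, and a format re-scale as 3x3 transforms, then multiplying them into one matrix, means the pixels need to be sampled only once, through the product:

```python
# Sketch of combining several 2-D operations into a single re-sampling by
# composing their matrices first. Illustrative values; not SynthEyes code.

def mat3_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply_affine(m, x, y):
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

shift = [[1, 0, 12.37], [0, 1, -4.81], [0, 0, 1]]        # sub-pixel stabilization
zoom = [[1.05, 0, -48.0], [0, 1.05, -27.0], [0, 0, 1]]   # small centered zoom
scale = [[2.0 / 3.0, 0, 0], [0, 2.0 / 3.0, 0], [0, 0, 1]]  # format re-scale

# One combined matrix = one image re-sampling instead of three.
combined = mat3_mul(scale, mat3_mul(zoom, shift))

# Applying the product to a point matches applying the steps in sequence,
# but a real image only suffers one round of filtering and interpolation.
step_by_step = apply_affine(
    scale, *apply_affine(zoom, *apply_affine(shift, 100.0, 200.0)))
in_one_go = apply_affine(combined, 100.0, 200.0)
```

Each re-sampling pass filters and interpolates the pixels, so three passes soften the image three times; the composed matrix pays that cost once.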

Functionality

SynthEyes gives you the controls you need for technically and artistically better results, integrated together for quick and easy use. 

In traditional stabilization, you peg a single feature onto a particular location in the image. That's great for shots orbiting a point of interest, but how about shots from a racecar driving down a long road, or a long helicopter flight? Features come and go all throughout the scene; there is nothing on-screen the entire time. SynthEyes has filter-mode stabilization for these shots. Position and rotation can each be stabilized in either mode, or not at all.
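Filter-mode stabilization can be sketched as low-pass filtering the camera path: smooth the measured motion curve, then apply the difference between the smoothed and measured paths as the per-frame correction. The plain moving average below is a stand-in for the real filter, and every value is illustrative:

```python
# Sketch of filter-mode stabilization: low-pass the camera's pan curve and
# use (smoothed - measured) as the per-frame correction. A simple moving
# average stands in for the actual filter; values are illustrative.

def moving_average(values, radius):
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - radius), min(len(values), i + radius + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def roughness(values):
    # Sum of squared second differences: a simple jitter measure.
    return sum((values[i + 1] - 2 * values[i] + values[i - 1]) ** 2
               for i in range(1, len(values) - 1))

# A steady pan (0.5 degrees/frame) with deterministic hand-held jitter:
pan = [0.5 * i + ((i * 37) % 11 - 5) * 0.15 for i in range(60)]
smoothed = moving_average(pan, 3)
# Degrees to rotate each frame so the delivered path follows the smooth one:
corrections = [s - p for s, p in zip(smoothed, pan)]
```

No single feature needs to stay on-screen for the whole shot: the filter only needs the solved camera path, so it suits racecar and helicopter shots where trackable features constantly come and go.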

SynthEyes lets you "direct" the stabilization: you can key-frame additional motion onto the point of interest. You can re-frame a shot to focus the audience's interest, or add life to a static stabilization. You can create or remove zooms. There are many creative possibilities here.

Or, you can pre-compensate for large bumps in the footage, by allowing the point of interest to move intentionally. You can minimize the amount of zoom required to maintain a full frame, maximizing final image quality.

SynthEyes can also re-sample your imagery to change its resolution or aspect ratio, permitting correct integrated film-for-HD/SD or HD-for-SD workflows. While once you might have used a 2-D crop for these, you can now recognize that an uneven crop ("pan and scan") is exactly a 2-D shifting stabilization, and causes the same geometric keystone distortion in the resulting images. Just say no to that, and say Yes! to SynthEyes stabilization.

SynthEyes easily is the best camera match mover and object tracker out there.

Matthew Merkovich
