Lens Distortion and Anamorphic Padding White Paper

This "white paper" shows some of the workflow details involved with lens distortion and anamorphic plate handling, where you may need to repeatedly modify initial settings. This is an advanced topic; you should already be familiar with the operation of the image preprocessor and the two-pass lens workflow.

You may be given anamorphic image formats, or other images, where the image's optic axis is not at the center of the usable portion of the image. The optic axis is an important point: it is the vanishing point toward which infinitely distant objects converge, and it is also the point about which lens distortion appears. It is by far simpler and more reliable for software to always assume that the optic axis is at the center of the image, and most applications will assume or require that, SynthEyes among them. When you must handle off-centered images, the workflow calls for centering the images by padding them along one side, and using that padded version for tracking, 3-D rendering, and compositing. In the final compositing stage, the padding is removed to produce an image that matches the original. (Depending on the workflow, padding may be removed from the CG elements and compositing done against the original images.) The padding workflow is the same as, and part of, the workflow for handling lens distortion.
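As a concrete illustration of the centering arithmetic (this is not SynthEyes code; the function name and pixel units are hypothetical), padding one side by the right amount moves the optic axis to the frame center:

```python
def centering_pad(width, axis_x):
    """Return (pad_left, pad_right) in pixels so that an optic axis at
    axis_x lands at the horizontal center of the padded image.
    Vertical padding works the same way with height and axis_y."""
    # Solve (axis_x + pad_left) == (width + pad_left) / 2 for pad_left.
    p = width - 2 * axis_x
    if p >= 0:
        return (p, 0)   # axis left of center: pad the left edge
    return (0, -p)      # axis right of center: pad the right edge

# An optic axis at x=900 in a 1920-wide frame needs 120 pixels of left
# padding: the padded frame is 2040 wide, its center is at 1020, and the
# axis sits at 900 + 120 = 1020.
```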

If we were to have a control that magically allows cameras to be de-centered, there would be very substantial performance costs on EVERY shot in EVERY part of the program, making everything more complex and less reliable, even more so when interactions with other programs are considered. By centering all images, operation is more transparent and correct operation is assured even with programs that are not off-centering-savvy (i.e., most of them). The padding done by the image preprocessor does not increase the storage used inside SynthEyes for the RAM cache, and when the padded images are written to disk in an image format that allows lossless compression (e.g., the run-length encoding of the Targa format), there is little impact on the amount of disk space required.

The tracking data stored within a SynthEyes .sni file always corresponds to the image being output from the Image Preprocessor, not to the image as it is read from disk. The reason for this is simple: it is the image preprocessor output that is cached in the RAM queue, displayed on screen, and examined as part of tracking (supervised or automatic). The saved data corresponds to that (output) image, so that trackers can be immediately displayed during screen redraw, hit testing, etc. If tracking data were stored relative to the original image (which is not even what the tracker positions correspond to), SynthEyes would have to convert tracker data from input-image locations to output-image locations over and over, for every redraw, etc. And when blip data is considered, there can be gigabytes of data to convert. So our approach is quite practical.

When you change the image preprocessor's geometry parameters (padding and distortion) and you already have tracking data (supervised or automatic), you must do so carefully to make sure the trackers stay attached to the same features despite the change in the image's format. You must first REMOVE the effect of the image preprocessor with the OLD settings, then APPLY the effect of the image preprocessor with the NEW settings. The Apply and Remove operations are performed using the Output tab of the image preprocessor. This same process must be used with the Lens Workflow button, which performs an Apply as part of its work.
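A minimal sketch of the remove-then-apply order, using a hypothetical pad-only mapping (real settings also include distortion, and this is not SynthEyes' actual API):

```python
def apply_settings(pos, settings):
    """Map a source-image point into the preprocessor's output image."""
    x, y = pos
    return (x + settings["pad_left"], y + settings["pad_top"])

def remove_settings(pos, settings):
    """Inverse of apply_settings: output-image point back to source coords."""
    x, y = pos
    return (x - settings["pad_left"], y - settings["pad_top"])

def reseat_tracker(pos, old, new):
    """REMOVE the old settings first, then APPLY the new ones."""
    return apply_settings(remove_settings(pos, old), new)

# A tracker at (120, 80) under 20 px of left padding moves to (150, 80)
# when the padding is changed to 50 px.
```

Doing the Apply without the Remove, or in the wrong order, composes mappings that were never meant to stack, which is exactly the mess described below.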

Here is a simple example of a portion of a lens distortion and padding workflow that you can try:

  1. Open a shot (e.g., the flyover example)
  2. Auto-track the shot
  3. Open image preprocessor
  4. Add a substantial sound-track padding on the right side of the image (or left, top, etc)
  5. Add lens distortion, concave or convex
  6. Close the image preprocessor
  7. Click on Lens Workflow
  8. Select the redistorted two-pass workflow
  9. Hit OK.
  10. Trackers have jumped to the correct corresponding positions in the image --- their 2-D positions, so far
  11. Run a Refine cycle. Now the 3-D positions of the trackers are in the right place also. (Depending on the degree of distortion and padding, the solution may be quite different; if so, a regular Automatic solve may be a good idea.)
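To see why the tracker 2-D positions jump, consider a toy one-parameter radial distortion model (SynthEyes' actual distortion model and conventions differ; this is only an illustration). Points are measured from the optic axis in normalized units:

```python
def distort(x, y, k):
    """Push a point outward (k > 0) or inward (k < 0) by 1 + k*r^2."""
    s = 1.0 + k * (x * x + y * y)
    return (x * s, y * s)

def undistort(x, y, k, iters=20):
    """Invert distort() by fixed-point iteration; there is no closed form
    for this model, but the iteration converges quickly for small k."""
    ux, uy = x, y
    for _ in range(iters):
        s = 1.0 + k * (ux * ux + uy * uy)
        ux, uy = x / s, y / s
    return (ux, uy)
```

Applying distortion moves every tracker except one sitting exactly on the optic axis, which is why the Lens Workflow must rewrite all the saved 2-D positions.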

Now here's something very similar (somewhat more condensed):

  1. Open a shot (e.g., the flyover example)
  2. Add sound-track padding on the right side of the image (or left, top, etc)
  3. Auto-track the shot (or track it supervised, etc.)
  4. Enter a lens distortion value on the image preprocessor, concave or convex
  5. Click on Lens Workflow, select the redistorted two-pass workflow
  6. AACCKK! Nothing matches up!

What happened? Well, when the lens workflow was run at step 5, there already was padding and lens distortion in place. And muddying the waters a bit further, the tracker positions corresponded to the presence of the padding but the absence of the lens distortion. So you get a big mess.
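Here is a toy numeric version of the failure, using padding alone (hypothetical, not SynthEyes code): the tracker coordinates already include the padding, so a second Apply adds it again:

```python
pad = 50  # pixels of padding already in place when Lens Workflow runs

def apply_pad(x):
    """Map a source-image x coordinate into the padded output image."""
    return x + pad

tracked_x = 150.0   # measured on the padded frame (source x was 100)

# Lens Workflow's Apply assumes its input is in SOURCE coordinates:
naive = apply_pad(tracked_x)        # 200.0 -- padding applied twice
fixed = apply_pad(tracked_x - pad)  # 150.0 -- Remove first, then Apply
```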

Here's the proper version:

  1. Open a shot (e.g., the flyover example)
  2. Add sound-track padding on the right side of the image (or left, top, etc)
  3. Auto-track the shot (or track it supervised, etc.)
  4. On the Image preprocessor's Output tab, select "Remove f/Trkers" to remove the effect of padding (and any earlier distortion) from the tracking data
  5. Add lens distortion, concave or convex
  6. Click on Lens Workflow, select the redistorted two-pass workflow
  7. Now it matches up

If you need to change the lens distortion value another time, pick up again at the new step 4 and remove the effect of the padding and existing distortion BEFORE changing the distortion (or possibly Undo back to the appropriate point).

Advanced Topic

The discussion above assumes that the off-centered images are supplied for work, and delivered as a result, as is typical for film work. However, off-centered images can be disconcerting to look at if the optic axis is far off-center (some Google "satellite" views that are really shot from planes, for example). The SynthEyes image preprocessor can be used to create centered images. To do that, the images must be padded as above, tracked, and a field of view determined. The preprocessor's Adjust tab can then be used to zoom in slightly and re-aim at the valid portion of the image so that it fills the frame. By its nature, the image preprocessor will only produce centered images.
