Anamorphic Distance: what the heck?

Discusses anamorphic distance, an optical property of anamorphic lenses with a significant, but little-understood, impact on matchmoving and graphics, one that can explain a number of their unusual characteristics. Shows how SynthEyes can calculate it, how it can be compensated for, what that looks like, and how it can be handled in downstream applications.

Script for Search Engines:

Hi, this is Russ Andersson. Welcome to our tutorial on anamorphic distance. If that had been an old anamorphic lens, you might have seen me as tall and narrow in the distance, then with a wide face in close-up. That was called "anamorphic mumps"; you might have heard it was cured, not by a vaccine, but by a Panavision lens redesign. But here's a quote from cinematographer David Mullen: "Panavision solved this by making the compression error occur in what's out of focus as you focus closer and closer, so that the out of focus background gets compressed more than two times." If you're a match mover, I should be hearing a big uh-oh about now. Panavision didn't fix the problem, they moved it into the background, which is exactly what we're usually tracking.

We're going to take a look at where this problem comes from optically, and what you can do about it in SynthEyes, using anamorphic distance solving. This is a bleeding-edge feature in SynthEyes 2304 that's going to BLOW YOUR MIND. Watch this video and you'll come away with a better understanding of anamorphic lenses, even if you never turn this capability on.

So what's going on? Anamorphic lenses can have two different nodal points, one for the horizontal direction and one for the vertical. You can see that here, where the horizontal rays intersect at one point and the vertically aligned rays intersect at another. This is a simplified drawing, of course; a real anamorphic lens has multiple groups of multiple elements, which can all shift around individually. Some of the elements have different curvatures in the vertical and horizontal directions, and those combine to create the two different nodal points you see here. The details aren't important; what matters is that there are two different nodal points.

The distance between the two nodal points is the anamorphic distance. It can be positive or negative, producing different effects on the image. A positive distance, measured from the horizontal nodal point, gives you wider faces up close, because the face is closer to the horizontal nodal point than to the vertical one. You can also think of it as a zoom combined with vertical shrinkage. With a negative anamorphic distance, the vertical nodal point is closer to the scene, and nearby objects blow up vertically. Once you see those two nodal points, you can also see how they explain another common anamorphic quirk, where horizontal and vertical edges have different optical sharpness. And you can see how different motions of the two nodal points could explain why the image aspect ratio changes when you pull focus, another common anamorphic effect.

Now that you've learned what the problem is, what can we do about it? First, here's what we can't do: we can't fix it with any 2D lens distortion, no matter how many parameters it has. A lens distortion is a 2D effect; it doesn't care what the lens is looking at. In a two-shot, as one person moves closer or further away while focus stays on the speaker, the lens distortion cannot change, because nothing mechanical or even electronic has changed in the lens. With two nodal points, though, where objects appear in the image depends on their distance from the camera. I've heard of proprietary software using hundreds of distortion coefficients to try to match lidar scans. When you hear that, you have to ask yourself: are we barking up the wrong tree? Are all those coefficients just trying to reproduce the depth map? The effect of two nodal points requires a 3D solution.

That means changing the basic perspective transform: you know, railroad tracks, perspective lines, vanishing points, and so on. Let's take a real quick look at what anamorphic distance does to perspective. Despite the equations, what we're doing is simple, and it duplicates what's happening in the real world: we add the anamorphic distance to the regular distance used for vertical perspective. We put it in the vertical term, rather than the horizontal one, because when people do adjust the camera mount to sit at a nodal point, they typically use a panning procedure that finds the horizontal nodal point. Also, by keeping the anamorphic distance in the vertical equation, it doesn't interact with the horizontal field of view that we normally work with.
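To make that concrete, here's a tiny sketch of the modified projection just described; this is my own illustrative notation, not SynthEyes's actual code or conventions. The horizontal coordinate is divided by the regular distance, while the vertical coordinate is divided by that distance plus the anamorphic distance.

    # Minimal sketch of the modified perspective transform described above.
    # Illustrative only: not SynthEyes code; a plain pinhole model with
    # arbitrary units is assumed.
    def project_anamorphic(x, y, z, focal, anam_dist):
        """Project a camera-space point; z is the distance in front of the lens.

        Horizontal perspective uses the regular distance z; vertical
        perspective uses z plus the anamorphic distance, i.e. the distance
        to the second (vertical) nodal point."""
        u = focal * x / z                # horizontal term: unchanged
        v = focal * y / (z + anam_dist)  # vertical term: anamorphic distance added
        return u, v

    # With a positive anamorphic distance, a nearby subject is divided by a
    # larger distance vertically than horizontally, so it renders relatively
    # wider ("wider faces up close"); far away, the two distances are nearly
    # equal and the effect fades.
    print(project_anamorphic(0.5, 0.5, 2.0, 35.0, 0.17))   # near subject
    print(project_anamorphic(0.5, 0.5, 50.0, 35.0, 0.17))  # distant subject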
Let's move on quickly from the math. We're ready to look at anamorphic distance solving in SynthEyes. We'll take our anamorphic lidar shot and give it a whirl. Anamorphic distance is only available on the anamorphic lens panels. We'll just set it to calculate, then solve. The scene has been scaled up by a factor of 10, from meters to decimeters, so the calculated value is 1.7 centimeters. A phone lens is small, but an anamorphic lens is several centimeters on a side, suggesting this value might be realistic. You can see that the calculated value doesn't affect the solve very much; it may even be spurious, a result of all the motion blur. You'll want to check whether it makes enough difference to the solve to keep it, considering the downstream complexity it causes; we'll take a look at that shortly. On shots with focus pulls, you'll want to use the "animate by frame" and "animate on keys" modes for the anamorphic distance. You'll find anamorphic distance in the graph editor directly under the camera, not under the lens distortion, because, let's say it all together, anamorphic distance isn't a lens distortion.

Let's take a look at the effect of the anamorphic distance on the stairway mesh in the camera view. I'm going to set the anamorphic distance to zero, then undo and redo so you can see the difference. If you look carefully, you can see that the mesh is affected more on the closer wall at right than on the more distant wall at the back. I've differenced those two images in Photoshop, producing this image; you can see how much further apart the lines are at right than at the back. Now consider that this particular set of distances and shifts applies only to this single frame. On other frames, for example earlier in this shot, the distances to these two portions of the mesh are very different: at the beginning of the shot, both sections are far away and little affected. The mesh needs to be adjusted to take the camera viewpoint into account on every frame.

Right now, that's not a feature of the perspective calculations in other applications. In fact, in this release it's not even a feature of SynthEyes's own perspective view, where the mesh won't line up the same way. Instead, we can use a vertex cache to store a compensated version of the mesh on every frame, taking the camera viewpoint on that frame into account. The SynthEyes exporters for Filmbox, Alembic, USDA, Nuke, and Blender can produce vertex caches for every mesh when anamorphic distance is present. Similarly, all tracker positions are animated. Let's export to Blender; to do that, we need to run the Lens Workflow first. The export takes some time, as it needs to write out the mesh in Python and also the vertex cache.
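As an aside, here's a rough sketch of the kind of per-frame adjustment such a compensating vertex cache bakes in; this is my own illustration of the idea, not the exporter's actual algorithm or file format. In each frame's camera space, every vertex's vertical coordinate is rescaled so that an ordinary single-nodal-point projection lands it where the two-nodal-point projection would have; because the rescale depends on that frame's camera distance, the cache has to store every frame.

    # Conceptual sketch only: the effective per-frame adjustment a compensating
    # vertex cache stores. Not the SynthEyes exporter's actual code.
    def compensate_vertex(x, y, z, anam_dist):
        """Vertex in that frame's camera space; z is the distance in front
        of the camera. Rescales the vertical coordinate so that a standard
        pinhole projection of the result matches the anamorphic projection
        of the original:  y' / z == y / (z + anam_dist)."""
        return x, y * z / (z + anam_dist), z

    # Nearby geometry shifts noticeably, distant geometry barely at all; as
    # the camera moves, each vertex's shift changes from frame to frame,
    # which is the ripple visible when scrubbing the exported scene from a
    # side view.
    for z in (2.0, 5.0, 50.0):
        print(compensate_vertex(0.0, 1.0, z, 0.17))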
Here's the scene once Blender has opened it. It looks completely normal, matching up. But if we orbit the camera to a side view and scrub, you can see a distortion rippling through the mesh and trackers: that's the effect of anamorphic distance. I'm loading that vertex cache into SynthEyes so the stairway will be compensated in the SynthEyes perspective view as well. We're not displaying back faces, which makes it easier to see the smaller and smaller shifts at greater distances out in front of the camera.

If you want to add meshes to the scene, you need to add them in SynthEyes first, then export them, so that you have the compensating vertex cache required to make them line up correctly. You might need to parent 2D graphics to specific vertices on the mesh. The large size of vertex caches also makes them less than ideal for workflow. The compensation could instead be done directly by the animation software, the way SynthEyes's camera view does it. If you're a plug-in developer, you might consider adding a viewpoint-dependent anamorphic distance deformer to your favorite app, or even a camera plug-in that knows about anamorphic distance; there's a rough sketch of the idea at the end of this page.

Anamorphic distance is worth knowing about and looking for, but it's not for casual everyday use. Keep in mind that rolling shutter is similar to anamorphic distance in that it causes problems that can't be cured by lens distortion calculations, and rolling shutter is present in 100% of shots these days. Some anamorphic lenses may have a non-zero anamorphic distance, especially the legacy lenses that people like, while others may not. Which do and which don't? We had to implement this feature to find out. If you turn it on, you're going down a rabbit hole where many things won't line up without substantial additional effort. If you're an outside tracking artist, you probably don't want to turn it on unless you're 110 percent sure that the client is ready and willing to handle it. If you have control over your entire pipeline and have shots where anamorphic distance is present, it might save the day. So let me know how you do. Don't forget to subscribe for more new videos, and thanks for watching.
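As promised above, here's a rough, hypothetical sketch of what a viewpoint-dependent anamorphic distance deformer could look like in Blender: a frame-change handler that applies the same per-frame vertical rescale a compensating vertex cache would bake, but live. The object and camera names, the distance value, and the handler approach are all assumptions for illustration; this is not how SynthEyes or its exporter actually works.

    # Hypothetical sketch of a viewpoint-dependent anamorphic-distance deformer
    # in Blender, using a frame-change handler instead of a baked vertex cache.
    # Illustrative only; object names and the distance value are assumptions.
    import bpy

    ANAM_DIST = 0.17        # assumed anamorphic distance, in scene units
    CAM_NAME = "Camera"     # assumed camera object name
    MESH_NAME = "Stairway"  # assumed mesh object name

    _rest = {}  # uncompensated rest positions, in object space, captured once

    def _compensate(scene, depsgraph=None):
        cam = scene.objects.get(CAM_NAME)
        obj = scene.objects.get(MESH_NAME)
        if cam is None or obj is None:
            return
        if obj.name not in _rest:
            _rest[obj.name] = [v.co.copy() for v in obj.data.vertices]
        cam_from_world = cam.matrix_world.inverted()
        world_from_cam = cam.matrix_world
        world_from_obj = obj.matrix_world
        obj_from_world = world_from_obj.inverted()
        for v, rest in zip(obj.data.vertices, _rest[obj.name]):
            p = cam_from_world @ (world_from_obj @ rest)  # vertex in camera space
            dist = -p.z                                   # Blender cameras look down -Z
            if dist > 1e-6:
                # same rescale a compensating vertex cache would bake for this frame
                p.y *= dist / (dist + ANAM_DIST)
            v.co = obj_from_world @ (world_from_cam @ p)
        obj.data.update()

    bpy.app.handlers.frame_change_post.append(_compensate)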
