Anamorphic Solve Using a Lidar Scan

This extensive, detailed tutorial shows an anamorphic shot being locked to a lidar scan. The resulting solve is used as a starting point for several additional tutorials. This process can also be used for non-anamorphic shots, for example, matching architectural models to drone shots (using a standard radial lens model).

Script for Search Engines:

Hi, this is Russ Andersson. Today we're going to look at matching an anamorphic shot to a lidar scan. Most of this process applies equally well to tasks such as matching a drone shot to a 3-D building model. Here, the source images are from an iPhone 13 Pro with a Moment 1.33x anamorphic lens, and the lidar scan is from the Polycam application on the iPhone. It's not exactly a big-budget production, but the fundamentals are the same. Here's a breakdown of the procedure: it not only aligns the solve to the mesh, but uses the mesh to determine the best lens distortion values for the anamorphic lens. We'll describe each step in more detail as we perform this process.

We're ready to import the model. If possible, obtain a textured mesh lidar scan, rather than just a point cloud; a textured mesh makes later alignment much easier. If you are exporting your 3-D model from an animation package, export it in your desired position, orientation, and scale. I'm just going to bring up the texture panel and hide the selection there. This mesh has had some garbage removed from it, and it already picks up its texture map file name from the OBJ file, so we don't have much work to do there. The orientation is pretty good, because the scan is auto-leveling. We just need to bring the ground level of this middle part up to the ground level of the coordinate system. To do that, I'll create a plane, then raise our scan until the middle hits ground level. As you'd expect, the stairways aren't completely level, to allow for drainage. So we have our initial placement of the model. If you have multiple shots, be sure to use this same 3-D model setup for each of them.

We're ready to start tracking the shot. Let's hide the mesh itself for now and make some adjustments to the auto-tracker parameters. We want fairly many trackers, with quite a few per frame, so that we can do the lens distortion calculations for the anamorphic lens. We're also going to change the blip sizes to compensate for the relatively high resolution of the shot and its periodic motion blur, since it is a handheld shot. Without this, the number of trackers drops to near zero on some frames during rapid camera motion. Adjusting the blip sizes is a more subtle approach than blurring or down-sampling the shot. Here are our trackers. One little trick: turning down the exposure in a shot like this makes it easier to find the trackers out in all these trees. It's just an easy way to look things over, but we're going to declare victory as it is and move on to solving.

There are a lot of parameters associated with the full anamorphic lens model that we'll eventually use. Trying to solve for all of them at once at the beginning is often unstable, so we recommend a gradual approach, starting with a few parameters of the Merged model. Let's go to the Solver room. We'll select the standard anamorphic Merged model and calculate the first three terms. That's our very first solve, with just those few parameters. We haven't done any real tracker cleanup yet, so let's see what we've got. You'll notice there are some pretty high errors on some of the trackers, so I'm going to run a quick clean-up-trackers pass and clean up some of the worst frames.
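As an aside before the cleanup pass, it may help to see roughly what a "merged" anamorphic model with one shared set of coefficients means, compared with the full model (used later) that gives the X and Y directions separate polynomials. The sketch below is a deliberately simplified polynomial distortion in Python; the coefficient layout, the polynomial form, and the parameter names are illustrative assumptions, not SynthEyes' actual anamorphic formulation.

```python
# Simplified illustration only -- NOT SynthEyes' internal lens model.
# It shows why a "merged" anamorphic model (one shared coefficient set)
# has far fewer unknowns than a full model with separate X and Y polynomials.

def distort(xu, yu, squeeze=1.33,
            cx=(0.0, 0.0, 0.0),   # polynomial coefficients for X (hypothetical layout)
            cy=None):             # separate Y coefficients; None = "merged" (reuse cx)
    """Map an undistorted normalized point (xu, yu) to its distorted position."""
    if cy is None:                # merged model: one set of coefficients for both axes
        cy = cx
    xs = xu / squeeze             # undo the 1.33x anamorphic squeeze horizontally
    r2 = xs * xs + yu * yu        # squared radius in the de-squeezed space
    fx = 1.0 + cx[0] * r2 + cx[1] * r2**2 + cx[2] * r2**3
    fy = 1.0 + cy[0] * r2 + cy[1] * r2**2 + cy[2] * r2**3
    return xs * fx * squeeze, yu * fy

# Merged: 3 unknowns here.  Full: separate cx and cy, twice as many (the real
# model has more terms still), which is why the full model is solved only after
# the merged solve has produced good starting values.
print(distort(0.5, 0.3, cx=(-0.05, 0.01, 0.0)))
```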
Because this initial solve has only a couple of the distortion parameters, we don't want to be too aggressive about killing off trackers before we have a genuinely accurate solve. So I'll simply say that any error over 15 pixels gets removed on that particular frame, just to take out a bunch of those really high peaks and keep them from throwing things off too much.

With that, let's go back to the solver and start computing all the rest of these parameters, plus the other anamorphic parameters as well. The exact sequencing is open to interpretation and experimentation on any individual shot. If you don't have many trackers, you may have to work through the parameters a few at a time, though doing that tends to commit to values that may ultimately not be the best. Here we started with the first couple, and now we'll solve for all the remaining parameters of the Merged model at once. That runs through pretty quickly. You can see that the error has dropped a bit, and now we can switch to the full model, which has different coefficients for the X and Y polynomials. That adds another nine parameters, which is why we didn't want to start with it from the beginning. Starting from the initial values and tuning them up, you'll see notably different values between X and Y, and the other values change a bit in the process as well. We'll use this as our initial solve for the next stage; so far, it is based solely on the tracker data itself.

Now we're ready to align the solve to our lidar mesh. We need to be able to see the mesh, so let's turn it back on. We'll work between the camera and perspective views, but first, notice how the solve and the lidar mesh are currently in very different locations; that's what we're working to address. We'll hide the meshes in the camera view so they don't get in the way. The idea now is to pick out some trackers in the solve and constrain them onto the appropriate locations on our lidar mesh, which we've already positioned. Having an actual lidar mesh here, rather than just a point cloud, makes this process easier. I'll pick out one of the trackers on the lower part of the step, using the various right- and middle-mouse operations in the camera and perspective views; in the perspective view I'll use these buttons to make it a little easier for you to see what I'm doing. You can see that tracker matches up on the mesh right over here, so we'll go into Place mode, which lets us put something onto a mesh, and drop it onto the corresponding location. You may notice that the lighting is a bit different between the scan and the live shot; there are some clouds going by, which makes the matching process a little more difficult in spots. That's our start. Now we'll repeat the process for the next one: again, we select a tracker and place it onto the mesh at the corresponding location.
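As an aside on what Place is doing geometrically: dropping a point onto the mesh under the cursor amounts to intersecting a viewing ray with the mesh's triangles and keeping the nearest hit. The sketch below is a generic ray/triangle test in the classic Möller-Trumbore form, included purely as an illustration of that idea; it is not SynthEyes' implementation.

```python
# Generic ray/triangle intersection (Möller-Trumbore), illustrative only.
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Distance along the ray to the triangle, or None if the ray misses it."""
    origin, direction = np.asarray(origin, float), np.asarray(direction, float)
    v0, v1, v2 = (np.asarray(v, float) for v in (v0, v1, v2))
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:                    # ray is parallel to the triangle's plane
        return None
    t_vec = origin - v0
    u = t_vec.dot(p) / det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(t_vec, e1)
    v = direction.dot(q) / det
    if v < 0.0 or u + v > 1.0:
        return None
    t = e2.dot(q) / det
    return t if t > eps else None         # hit must be in front of the ray origin

# Placing onto a whole mesh is then "test every triangle under the cursor ray,
# keep the nearest hit".  A bare point cloud has no surface to hit, which is
# why placement is harder and less accurate without a mesh.
print(ray_triangle([0, 0, -1], [0, 0, 1], [-1, -1, 0], [1, -1, 0], [0, 1, 0]))  # 1.0
```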
Those two initial trackers are good, but now we need to move further down the stairs, so that we have adequate constraints over the length of the solve and the length of the lidar scan. That requires a bit more moving around to track down the right spot. I'll pick out this tracker; let's choose a slightly less blurry spot. There it goes. Now we'll move even further along, onto that top step. It's a tricky one, with a couple of different things going on in the perspective view, 2-D panning and whatnot, but here we go: another tracker. Oops, we're back in Navigate mode; let's switch back to Place, and that one goes out there. Now we have four trackers spread throughout the length of the shot.

We're ready to go back to the solver and actually line things up. At this point we keep the Constrain checkbox off: all we're looking for is a basic alignment of the existing solve to the lidar mesh, and you can see in the views that we've now done that. Once the initial lidar alignment has been done, it's easier to place additional trackers, and we can work exclusively in the perspective view. As we scrub through the shot, we can use backslash to toggle Solid Meshes on and off, switching to wireframe to see the original images, and then to the textured solid-mesh mode to see the lidar scan. That makes it easy to spot misalignments between the shot and the scan, though we need to ignore any lighting and shadowing changes.

We want to place additional trackers so that the distortion coefficients can be updated to make the solve match up better. We look for the feature a tracker is tracking in the real image, then look for that feature on the mesh surface. Shift-select to grab a tracker, then Place it on the mesh. You may need to add supervised trackers at particularly important locations that you want to constrain. If you only have a point cloud, this step will be more difficult and less accurate, since you can only place onto individual lidar points rather than an actual mesh surface, and you'll need to place more trackers. I'll place several trackers now.

I've added a bunch more points; we're ready to update the solve. We turn on the Constrain checkbox so that the solve is actively forced to match the constraints, rather than only being aligned to them, as was initially the case. The hpix error has increased a little at this stage, because we're pulling the solution away from its unconstrained optimum. But let's look at the Coordinates room, where we see all the errors. If I undo that last solve, you can see the errors were initially quite high, but after this last solve they're quite low; that figure is the distance between each solved tracker location and its position on the lidar mesh. The solve is fitting the mesh much better. Keep in mind that the solve is forced to match not only the real world but also the errors in the scan, and any rolling-shutter effects.

I can scrub through the shot and flicker back and forth to assess the match. I'll hit the J key to turn off the trackers and reduce clutter. The spots where you see noticeable discrepancies are exactly the spots where you want to position additional trackers on the mesh; if there aren't trackers there, you can add them using supervised tracking. You can keep going back to step six as many times as necessary. I'm going to place even more trackers now. I've placed more trackers, and refined the solve a bit further.
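For intuition about the difference between the alignment-only pass and the constrained solve: with Constrain off, fitting the existing solve to the placed mesh points amounts to finding a single rotation, translation, and uniform scale that best maps the solved tracker positions onto their mesh positions. A generic closed-form fit of that kind (Kabsch/Umeyama style) is sketched below as a standalone illustration; it is not SynthEyes' code, and the function and variable names are made up for the example.

```python
# Generic similarity-transform fit (Kabsch/Umeyama style), illustrative only.
import numpy as np

def fit_similarity(solved_pts, mesh_pts):
    """Return scale s, rotation R, translation t with  s * R @ p + t ~= q
    for each solved tracker position p and its placed mesh position q."""
    P, Q = np.asarray(solved_pts, float), np.asarray(mesh_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    P0, Q0 = P - cp, Q - cq
    U, S, Vt = np.linalg.svd(P0.T @ Q0)             # SVD of the cross-covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against a reflection
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T                              # best-fit rotation
    s = np.trace(D @ np.diag(S)) / (P0 ** 2).sum()  # least-squares uniform scale
    t = cq - s * R @ cp
    return s, R, t

# Four well-spread tracker/mesh pairs, as placed in the tutorial, are already
# enough to pin down the seven degrees of freedom of this transform.
```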
As you do each solve, you should make sure that the distortion values don't become really large, and check that the Lens room grid doesn't develop any strange kinks. Meanwhile, in the Coordinates room, you'll see the individual errors start going up, just as the hpix value has, as the scene becomes more heavily constrained, even though the solve matches the scan better overall. If any tracker has a dramatically higher error, check that it is placed correctly. Eventually, you'll either get a solve that's good enough, or reach a point of diminishing returns on your effort. Here, I should have kept the shutter time short to reduce motion blur. How good a match you can achieve is limited by how well the lens can be described by the lens model.

You should also be concerned about the accuracy of your 3-D model. The Polycam software talks about accuracy "up to 2%"; that's not great for match-moving, since 2% across even a few meters of scan is several centimeters, and you're matching the errors in the lidar scan. Let's look up close: you can see that the vertices aren't well placed near corners, especially on this wall. The models are better in flat areas than where there are details. You should also be concerned about the effect of rolling shutter; it affects every shot to some degree, no matter what the camera motion or speed is. It's best to have determined the camera's rolling shutter value beforehand, so that calculating it doesn't interact with the solve. Either way, if a rolling shutter value is necessary for a good solve, rendering 3-D elements with simulated rolling shutter is a good idea.

You should always keep sight of what is ultimately supposed to be done to the shot, to avoid concentrating on unnecessary features. In some cases, you might consider rigging up the lidar scan for Geometric Hierarchy tracking, driven by some trackers in the match-move, and using that to make the scan match the actual shot footage. As you've seen, matching shots exactly to a lidar scan is a substantial task requiring attention to detail. There aren't any special techniques required, just the use of the various SynthEyes tools in an organized way. This has been a long tutorial, with a lot of steps. It's better to absorb the ideas first, and then how to achieve them, rather than trying to memorize individual button pushes. Thanks for watching.
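As a post-script to the rolling-shutter note above, here is a minimal sketch of the idea behind rendering 3-D elements with simulated rolling shutter: each image row is treated as captured at a slightly later time, so the renderer samples the solved camera path per row rather than once per frame. The frame rate, readout fraction, and image height below are made-up example values, not measurements from this shot.

```python
# Rolling-shutter timing sketch, illustrative values only.
ROLLING_SHUTTER = 0.5     # fraction of the frame interval spent reading out the sensor
FRAME_TIME = 1.0 / 30.0   # seconds per frame (assumes a 30 fps shot)
IMAGE_HEIGHT = 2160       # image height in pixels

def scanline_time(frame, row):
    """Capture time of a given image row, relative to the start of the shot
    (top row assumed to be read out first)."""
    offset = ROLLING_SHUTTER * FRAME_TIME * (row / (IMAGE_HEIGHT - 1))
    return frame * FRAME_TIME + offset

# A renderer that evaluates the solved camera at scanline_time(frame, row) for
# each row reproduces the skew a rolling-shutter camera bakes into fast moves.
print(scanline_time(10, 0), scanline_time(10, IMAGE_HEIGHT - 1))
```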
