How do you actually use helper frames?

I know all the courses are ancient by tech standards, and you feel this especially in the VFX course, but how exactly does one use the helper frames mentioned in the matchmoving chapters? Modern matchmoving packages such as SynthEyes and 3DEqualizer do let you load in a 3D model of the scene that data wranglers and/or script supervisors scanned via photogrammetry or LIDAR, but not individual stills. I believe supplementing the photogrammetry process that generates the matchmove with extra angles/photos could be immensely helpful, and also much cheaper than a full, pre-made 3D scan, but where exactly can you load them nowadays, except for maybe the long-discontinued Autodesk MatchMover that the creators of the VFX For Directors course loved so much in like 2007 or something?

The VFX For Directors course is quite old, but the underlying concepts haven’t changed dramatically. So if you set aside the specific software, think about the following.

We know that we can do photogrammetry on two or three photos of the same thing from different angles. Doing it manually, you’d identify the same features as seen from each angle, and you’d be able to resolve all those points in 3D, because there’s only one solution where every point lands where it’s supposed to as seen from those very different angles. The point is that we’re getting very reliable 3D information, because we’re observing the scene from very different angles, even if we aren’t actually filming from them.
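
If it helps to see the geometry, here’s a minimal triangulation sketch in plain numpy (the cameras and numbers are made up, and real solvers are far more elaborate): given the same feature observed from two known angles, there’s exactly one 3D point consistent with both views.

```python
import numpy as np

def triangulate(P1, x1, P2, x2):
    """Linear (DLT) triangulation: each view contributes two rows."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # back to Euclidean coordinates

def project(P, X):
    x = P @ X
    return x[:2] / x[2]

# Two hypothetical cameras: one at the origin, one shifted along X.
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-2.0], [0.0], [0.0]])])

# Project a known point to fake two consistent pixel observations.
X_true = np.array([0.5, -0.2, 8.0, 1.0])
print(triangulate(P1, project(P1, X_true), P2, project(P2, X_true)))
# -> approximately [ 0.5 -0.2  8. ]
```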

Now imagine that you have a very small camera move, or one that you’re struggling to matchmove, for whatever reason. If you were to tell the software the exact coordinates of just a few of the features you’re tracking, everything becomes crystal clear to the software, and you’re able to solve even an extremely impaired scene.
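
To see why a few known coordinates lock everything down, here’s a sketch using OpenCV’s solvePnP. This isn’t what SynthEyes or 3DEqualizer literally run internally, but it shows the principle: once a handful of tracked features have known 3D positions, the camera pose for a frame follows directly from them, regardless of how small the camera move is. All numbers are hypothetical.

```python
import numpy as np
import cv2

# Known 3D coordinates of four tracked features, e.g. marks measured
# on a wall (hypothetical values, in meters, all on one plane).
survey_points = np.array([
    [0.0, 0.0, 0.0],
    [1.2, 0.0, 0.0],
    [1.2, 0.8, 0.0],
    [0.0, 0.8, 0.0],
])

K = np.array([[1800.0, 0, 960], [0, 1800.0, 540], [0, 0, 1]])
dist = np.zeros(5)  # assume no lens distortion for the sketch

# Fake a "ground truth" camera 3 m back and treat its projections as
# our 2D tracks, so we can check the recovered pose against it.
rvec_true = np.array([0.05, -0.1, 0.0])
tvec_true = np.array([-0.6, -0.4, 3.0])
tracked_2d, _ = cv2.projectPoints(survey_points, rvec_true, tvec_true, K, dist)

# One frame, four known points: the pose is fully determined.
ok, rvec, tvec = cv2.solvePnP(survey_points, tracked_2d, K, dist)
print(ok, rvec.ravel(), tvec.ravel())  # ≈ rvec_true, tvec_true
```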

That’s all helper frames are: a separate photogrammetry pass done under more ideal conditions, using photographs from very different angles. Then, with the knowledge gained from that, you can lock down a much smaller or more impaired matchmove by giving the solver secret information about the scene.

MatchMover by RealViz (later Autodesk) called these “helper frames”. But what you really need is photographs of your location from different angles, in addition to the matchmove itself. Then you can do photogrammetry on those photographs separately in software A, and when you’re matchmoving in software B, you can plug in that 3D knowledge, and it can make the difference between success and failure.
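
As a sketch of the “plug in that 3D knowledge” step: even if software B won’t take survey points directly, you can align its solve to software A’s coordinates with a similarity transform over a few shared features (the Umeyama method is one standard way to compute it; the numbers below are made up).

```python
import numpy as np

def similarity_transform(src, dst):
    """Find scale, R, t so that scale * R @ src + t ≈ dst (Umeyama)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    s_c, d_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(d_c.T @ s_c / len(src))
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:  # guard against a reflection
        D[2, 2] = -1
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / s_c.var(0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# The same four features as solved by each package (made-up values):
# "B" happens to see the scene at half scale and shifted.
pts_a = np.array([[0, 0, 0], [2, 0, 0], [2, 1, 0], [0, 1, 1]], float)
pts_b = 0.5 * pts_a + np.array([3.0, -1.0, 0.2])

scale, R, t = similarity_transform(pts_b, pts_a)
print(scale, t)  # ≈ 2.0 and the shift mapped back
```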

The real lesson is to make sure these extra photographs of the location are taken as close in time to the main shot as possible. Data collection should be on the shot list for the VFX people to do immediately before or after.

Big productions have people running around with LiDAR scanners. Then you can reconstruct a point cloud after the fact and extract all the secret knowledge you need.
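
If you do get a scan, pulling a survey coordinate out of it afterwards can be as simple as a nearest-neighbor lookup, sketched here with scipy (the file name and positions are hypothetical, and this assumes the cloud has already been registered to your scene):

```python
import numpy as np
from scipy.spatial import cKDTree

# Load an ASCII point cloud with one "x y z ..." row per point.
cloud = np.loadtxt("set_scan.xyz")[:, :3]
tree = cKDTree(cloud)

# Rough position of a tracked feature, eyeballed in a point-cloud viewer.
guess = np.array([1.25, 0.82, 0.03])
dist, idx = tree.query(guess)
print("survey coordinate:", cloud[idx], "distance:", dist)
```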

If you’re a director, you need to make time for this. These VFX people feel like everyone hates them for taking time to collect data, but they’re saving everybody’s butts. Even on blockbusters, data collection doesn’t get enough respect.

I hope this helps.

Per
