3D Motion Tracking
In Chapter 3, in the discussion of 2D motion tracking, you saw how to track one, two, and even four points on an image to record and utilize positional/translational, rotational, and apparent scaling data. I say apparent because what we are actually tracking is points moving closer together or farther apart, as shown in Figure 4.24, simulating a change in scale, which will sometimes suffice for creating simulated Z-depth movement.
Figure 4.24 Simulated scaling as an object moves closer to or farther away in depth (Z-space) from the camera
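This apparent scale can be estimated directly from the changing distance between two tracked points. The following is a minimal sketch of the idea; the function name and the pixel coordinates are hypothetical:

```python
import math

def apparent_scale(p1_start, p2_start, p1_end, p2_end):
    """Estimate apparent scale change from the distance between
    two tracked 2D points at the start and end of a move."""
    d_start = math.dist(p1_start, p2_start)
    d_end = math.dist(p1_end, p2_end)
    return d_end / d_start  # > 1 means the subject appears to move closer

# Hypothetical tracked corner points of a sign, in pixels
scale = apparent_scale((100, 200), (300, 200), (80, 190), (340, 190))
print(scale)  # 1.3 -> the subject appears 30% larger at the end
```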
Many times this will be sufficient data to allow you to lock your element to the plate so you can create a seamless integration. But what happens if the camera is orbiting around or within a scene, or is moving through a scene at an angle, allowing you to see around objects in the scene as you pass them? New VFX artists frequently want to know the dividing line between when a 2D track is enough to make a shot work and when a 3D track is required. Well, this is it. Any time the camera orbits around or within a scene, or translates past objects in a scene in Z-depth closely enough to reveal the 3D nature of a subject or object (or reveal a portion, or portions, of those objects that weren't seen originally), as the examples in Figures 4.25–4.28 illustrate, a 3D track and solution are required.
Figure 4.25 XY translation-only camera movement that would work well with a 2D track
Figure 4.26 Z translation-only camera movement that would work well with a 2D track
Figure 4.27 Orbital camera movement requiring a 3D track
Figure 4.28 Camera translation close to the subject and at an angle so that the 3D nature of the object or subject is revealed
Unlike 2D tracking, which derives its data from the X and Y motion of pixels on a flat screen, 3D tracking uses much more complex triangulation calculations to determine objects' actual positions and motion in 3D space. If you want to integrate a 3D object into a scene where the camera is moving in three dimensions, you need to be able to re-create that camera's motion in 3D and have your virtual camera repeat the same motion in order for your element to integrate seamlessly.
To be really good at 3D tracking (and to avoid the needless frustration many artists encounter), it’s important to understand how 3D tracking works.
The origins of 3D tracking technologies lie in the science of photogrammetry, the scientific method of calculating positions and distances of points referenced in one or more images. By comparing and triangulating the position of points referenced in multiple images (as seen in Figure 4.29), or consecutive frames of a motion image, the position of those points, as well as that of the camera, can be calculated using trigonometry and geometric projections.
Figure 4.29 Points in two images being triangulated to determine camera position
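The triangulation principle behind photogrammetry can be illustrated with a simplified two-dimensional sketch: given two known camera positions and the bearing (ray direction) toward the same feature from each, the feature's position is the intersection of the two rays. All positions and angles below are hypothetical, and a real solver works in 3D with many points at once:

```python
import math

def triangulate_2d(cam_a, angle_a, cam_b, angle_b):
    """Intersect two rays (camera position + bearing in radians)
    to recover the 2D position of a shared feature point."""
    # Ray directions as unit vectors
    da = (math.cos(angle_a), math.sin(angle_a))
    db = (math.cos(angle_b), math.sin(angle_b))
    # Solve cam_a + t*da = cam_b + s*db for t using Cramer's rule
    denom = da[0] * (-db[1]) - da[1] * (-db[0])
    rx, ry = cam_b[0] - cam_a[0], cam_b[1] - cam_a[1]
    t = (rx * (-db[1]) - ry * (-db[0])) / denom
    return (cam_a[0] + t * da[0], cam_a[1] + t * da[1])

# Two cameras 10 units apart, both sighting a feature at (5, 5)
point = triangulate_2d((0, 0), math.atan2(5, 5), (10, 0), math.atan2(5, -5))
print(point)  # approximately (5.0, 5.0)
```

In a real solve, the camera positions themselves are unknown, so the software must estimate cameras and points together from many such ray constraints.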
3D Motion Tracking Application Technique
A 3D tracker does its mathematical magic in a series of well-defined steps:
A mass (usually automated) 2D track, or auto track, of the scene is performed, tracking many (sometimes hundreds or thousands of) high-contrast candidate (or potential) points in the scene. This first track is almost identical to a 2D track except that it is done on a mass scale across the entire image, as shown in Figure 4.30. During this process, complex software algorithms sift through all of the tracked 2D points to weed out and delete any that fall below a user-set confidence threshold (meaning how confident the software is that the point being tracked is the same point on each frame or range of frames).
Figure 4.30 The first step in a 3D track is a mass, automated 2D track.
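The confidence filtering in this step amounts to a threshold pass over the candidate tracks. Here is a minimal sketch; the track names, per-frame confidence values, and threshold are all hypothetical:

```python
# Hypothetical candidate 2D tracks: name -> per-frame confidence scores
candidate_tracks = {
    "pt_001": [0.98, 0.97, 0.95, 0.96],
    "pt_002": [0.91, 0.40, 0.35, 0.88],  # tracker lost the feature mid-shot
    "pt_003": [0.99, 0.99, 0.98, 0.99],
}

def filter_tracks(tracks, threshold=0.85):
    """Keep only tracks whose minimum per-frame confidence
    stays at or above the user-set threshold."""
    return {name: conf for name, conf in tracks.items()
            if min(conf) >= threshold}

kept = filter_tracks(candidate_tracks)
print(sorted(kept))  # ['pt_001', 'pt_003']
```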
Next, a complex 3D camera solve is done. A solve is an exhaustive series of calculations wherein the motion of every point tracked is compared and triangulated on a frame-by-frame basis (usually both forward and backward) to determine its position as well as the camera’s position and any movement within each frame, as shown in Figure 4.31. The more information known about the camera used and its motion and environment, the more accurate the solve will be.
Figure 4.31 3D camera solve
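The quality of a solve is commonly measured by reprojection error: how far each solved 3D point lands from its tracked 2D position when projected back through the solved camera. The following toy pinhole-projection version illustrates the idea; the focal length, image center, and point values are hypothetical:

```python
import math

def project(point_3d, focal_length=1000.0, center=(960, 540)):
    """Pinhole projection of a camera-space 3D point to pixel coordinates."""
    x, y, z = point_3d
    return (focal_length * x / z + center[0],
            focal_length * y / z + center[1])

def reprojection_error(solved_3d, tracked_2d):
    """Mean pixel distance between projected solved points and their 2D tracks."""
    errors = [math.dist(project(p3), p2)
              for p3, p2 in zip(solved_3d, tracked_2d)]
    return sum(errors) / len(errors)

# One solved point at (1, 0.5, 10) in camera space, tracked at (1061, 590)
err = reprojection_error([(1.0, 0.5, 10.0)], [(1061.0, 590.0)])
print(err)  # 1.0 pixel of error
```

A solver iteratively adjusts the camera and point positions to drive this error down across every frame, which is why more known information about the camera yields a more accurate solve.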
Once the 3D tracking application completes its solve, it will display the resulting 3D camera, motion track, and point cloud (cluster of points representing solved candidate points). The camera's position and track, at this point, are relative to the point cloud and not necessarily aligned with the real-world X, Y, and Z axes (as shown in Figure 4.32), so the next step is to align, or orient, the scene. Most 3D tracking applications have scene orientation tools that allow you to designate a point in the scene as the X, Y, Z (0, 0, 0) origin. Scene orientation can be further refined using tools that allow you to designate certain points as being on a common plane, or that allow you to manually translate, rotate, and scale the entire scene into position by eye, or by aligning to reference grids, as shown in Figure 4.33.
Figure 4.32 3D camera, track and point cloud
Figure 4.33 3D orientation and alignment of camera track and point cloud scene
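Designating a point as the scene origin amounts to subtracting that point's coordinates from every point in the cloud (the same offset is applied to the camera path). A minimal sketch with hypothetical point values:

```python
def set_origin(points, origin):
    """Re-express a point cloud so that `origin` becomes (0, 0, 0)."""
    ox, oy, oz = origin
    return [(x - ox, y - oy, z - oz) for x, y, z in points]

# Hypothetical solved point cloud; choose the first point as the origin
cloud = [(2.0, 1.0, 5.0), (3.0, 1.2, 6.5), (1.5, 0.8, 4.9)]
oriented = set_origin(cloud, cloud[0])
print(oriented[0])  # (0.0, 0.0, 0.0)
```

Rotating the scene so a set of points lies on a common ground plane, or scaling it to a known measurement, works the same way: one rigid transform applied uniformly to the cloud and the camera.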
At this point, most 3D tracking applications will allow you to place test objects into the scene to determine how well they follow the track (or stick), as shown in Figure 4.34.
Figure 4.34 3D test objects inserted into the 3D tracked scene
If there are any errors or errant motions in the track, you can apply mathematical filters to smooth the track's motion. Averaging and Butterworth filters are commonly used for this. Isolated errors or motions may also be edited or removed manually by editing, adjusting, or deleting track motion keyframes, as shown in Figure 4.35.
Figure 4.35 Editing 3D camera motion track keyframes
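An averaging filter of the kind described above can be sketched as a centered moving-average pass over one channel of the camera's motion curve. The window size and frame values here are hypothetical:

```python
def moving_average(values, window=3):
    """Smooth a motion-curve channel with a centered moving average.
    Edge frames are averaged over the partial window that exists."""
    half = window // 2
    smoothed = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        smoothed.append(sum(values[lo:hi]) / (hi - lo))
    return smoothed

# Hypothetical camera X-translation per frame with a jittery spike at frame 3
x_curve = [0.0, 1.0, 2.0, 9.0, 4.0, 5.0]
print(moving_average(x_curve))  # [0.5, 1.0, 4.0, 5.0, 6.0, 4.5] -> spike damped
```

A wider window smooths more aggressively but risks softening genuine camera motion, which is why isolated errors are often better fixed by editing individual keyframes.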
Once the 3D camera track proves to be solid, the data can then be exported in a variety of file and scene formats to other 3D and/or compositing applications for use.
3D Motion Tracking Applications
There are many 3D motion tracking applications; some come as integrated solutions within other applications, while others are standalone. Although their workflows and methodologies vary somewhat, they all perform the steps outlined in the preceding section (whether visibly or under the hood, in the case of completely automated versions). This section introduces some of the most popular 3D tracking applications.
Originating from the University of Manchester’s Project Icarus, PFTrack (www.thepixelfarm.co.uk) (see Figure 4.36) and its sibling applications have grown into some of the most powerful and widely used 3D tracking applications in the VFX industry.
Figure 4.36 PFTrack user interface
Figure 4.37 Boujou user interface
Nuke and After Effects
3D tracking has become a commonly integrated feature in compositing applications, which continue to grow and blur the lines between VFX job descriptions. Recently, compositing applications such as The Foundry’s Nuke (www.thefoundry.co.uk) (Figure 4.38) and Adobe’s After Effects (www.adobe.com) (Figure 4.39) have also integrated 3D tracking capabilities.
Figure 4.38 The Foundry’s Nuke 3D tracking interface
Figure 4.39 Adobe After Effects’ 3D tracking interface
Even Imagineer Systems' planar tracker Mocha Pro (www.imagineersystems.com) (as seen in Figure 4.40) has been given a turbo boost with its ability to extrapolate 3D camera motion from multiple 2D planar tracks, producing very impressive results on some shots where standard 3D trackers fail.
Figure 4.40 Imagineer Systems’ Mocha Pro user interface
One of the first affordable 3D tracking applications, SynthEyes (www.ssontech.com), shown in Figure 4.41, has also grown in capability and features to become a powerful and widely used 3D tracking solution.
Figure 4.41 SynthEyes user interface
The University of Hannover's Laboratory for Information Technology developed Voodoo (www.viscoda.com) (Figure 4.42), a free 3D camera tracking application for non-commercial use and an excellent tool for beginners who want to experiment with 3D camera tracking at no cost.
Figure 4.42 University of Hannover’s Laboratory for Information Technology’s Voodoo user interface