
3D Matchmoving

Once a solid 3D tracking solution is exported from a 3D tracking application, creating a 3D matchmove involves little more than setting up a scene and importing the solution into your 3D or compositing application of choice, as shown in Figure 4.43 and Figure 4.44. (See Chapter 3 for much more on matchmoving.)

Figure 4.43

Figure 4.43 3D tracking solution and point cloud in PFTrack

Figure 4.44

Figure 4.44 The same 3D tracking and point-cloud solution imported into a 3D application, where the tracking points from the point cloud appear as null objects

Advanced 3D Tracking Strategies

In many situations, knowing a few advanced 3D tracking strategies proves extremely helpful as well.

Hand 3D Tracking and Matchimation

Unfortunately, as is usually the case in VFX, 3D tracking is often not quite as simple as auto-track, auto-solve, auto-orient, and export. Tracks contain too much noise or too many errors, or they just downright fail altogether. In these cases, as with 2D tracking and matchmoving, you need a fallback strategy.

Very similar to the hand-tracked 2D track in Chapter 3, when all else fails, you can hand track, or matchimate, a 3D track as well. Matchimation, derived from the combination of matchmove and animation, refers to the process of manually matching a track frame by frame or keyframe by keyframe.

To hand track a 3D scene, you first want to create 3D reference stand-in objects for any scene elements with known sizes and/or positions. You are basically trying to replicate key elements of the scene in your 3D application. Elements nearest to the 3D CG object you intend to place into the scene are the most important to place, if possible. In Figure 4.45 you can see a dolly shot sequence filmed on a bluescreen set, which will become an air traffic control radar monitoring station in this example.

Figure 4.45

Figure 4.45 Bluescreen VFX sequence to be hand 3D tracked

Load the footage into the background of your 3D application, making sure the footage size and aspect ratio are set correctly in both the background and the scene's camera. Set your 3D models to wireframe view mode so that you can easily see through them to the footage behind, as well as the wireframe edges outlining your elements.

Since I know that we cut the tabletop portions of the “radar stations” to 30 inches wide and left them at their full 8-foot plywood length, there is a base measurement to start with when building a reference object in your 3D application. In Figure 4.46 you can see two of these, laid end to end, to represent the two workstation countertops. Let’s eyeball the height of these countertops and place them at about 27 inches (the height of my workstation desk, which seems about right). Next, using the camera VFX cues you can ascertain (discussed in Chapter 1), set your camera to a fairly wide focal length and place the camera’s starting position at about 30 inches off the ground and approximately 10–12 feet away from the subject, as shown in Figure 4.46.

Figure 4.46

Figure 4.46 Camera placed at guesstimated starting height and position
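The measurements above translate directly into scene units for the stand-in geometry and camera. The sketch below shows that arithmetic in Python, assuming a centimeter-based scene scale; the variable names and the 11-foot camera distance (the middle of the 10–12 foot guesstimate) are illustrative assumptions, not values from any particular 3D application.

```python
# Convert the known set measurements into scene units (centimeters here,
# an assumed scene scale) to size the stand-in counters and place the camera.

INCH_TO_CM = 2.54
FOOT_TO_CM = 30.48

# Known measurements from the set
counter_width = 30 * INCH_TO_CM    # tabletops cut to 30 inches wide
counter_length = 8 * FOOT_TO_CM    # full 8-foot plywood length
counter_height = 27 * INCH_TO_CM   # eyeballed countertop height

# Two countertops laid end to end, as in Figure 4.46
total_counter_length = 2 * counter_length

# Guesstimated starting camera placement
camera_height = 30 * INCH_TO_CM    # about 30 inches off the ground
camera_distance = 11 * FOOT_TO_CM  # roughly 10-12 feet from the subject

print(f"counter: {counter_width:.2f} x {counter_length:.2f} cm, "
      f"top at {counter_height:.2f} cm")
print(f"camera start: {camera_height:.2f} cm high, "
      f"{camera_distance:.2f} cm back")
```

Whatever unit system you choose, the point is to carry real-world measurements into the scene consistently so the stand-ins and camera share one scale.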

Align the wireframe with the counter at whatever point in the shot you choose. Remember, it’s perfectly acceptable to work from beginning to end, end to beginning, middle forward and back, and so on. Keep in mind the information you can deduce from the scene—such as that the camera appears to be on a dolly and so will likely translate in a straight line, even if it pans about on its Y axis. Move the camera in a straight line on its local axis to the end (or farthest point) of the shot and pan the camera until the counters and the wireframes align, as shown in Figure 4.47. Set your camera’s first keyframe here.

Figure 4.47

Figure 4.47 Camera aligned with scene element

From here, it’s the same procedure you followed for the hand 2D track, only in 3D. You will move your camera along the guesstimated path to the point where the 3D scene element you’re tracking diverges the farthest from the wireframe before either beginning to return or changing directions. This will be your next keyframe position, and you will realign your camera until the stand-in and on-screen element are aligned, then set your next keyframe, and so on (Figure 4.48).

Figure 4.48

Figure 4.48 Camera aligned with scene element to next keyframe position

Then simply repeat this process until the wireframe and scene elements are locked throughout the duration of the shot.
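The logic behind this keyframing workflow can be sketched in code: you key camera poses only at the frames of maximum divergence, and the 3D application tweens the poses in between. The minimal Python sketch below models that with linear interpolation, which matches the straight-line dolly assumption; the frame numbers, positions, and pan angles are made-up illustrative values, not from the actual shot.

```python
# Minimal sketch of hand-track keyframing: camera poses keyed only at
# maximum-divergence frames, with poses in between linearly interpolated
# (as a 3D application would tween them for a straight-line dolly move).

from bisect import bisect_right

# (frame, (x, y, z) position in scene units, pan angle in degrees)
# Illustrative values only.
keyframes = [
    (1,  (0.0, 76.0, -350.0),   0.0),  # first aligned pose
    (48, (120.0, 76.0, -350.0), 8.0),  # farthest-divergence realignment
    (96, (240.0, 76.0, -350.0), 15.0), # end of the dolly move
]

def pose_at(frame):
    """Linearly interpolate camera position and pan between keyframes."""
    frames = [k[0] for k in keyframes]
    if frame <= frames[0]:
        return keyframes[0][1], keyframes[0][2]
    if frame >= frames[-1]:
        return keyframes[-1][1], keyframes[-1][2]
    i = bisect_right(frames, frame) - 1
    (f0, p0, a0), (f1, p1, a1) = keyframes[i], keyframes[i + 1]
    t = (frame - f0) / (f1 - f0)
    pos = tuple(c0 + t * (c1 - c0) for c0, c1 in zip(p0, p1))
    return pos, a0 + t * (a1 - a0)
```

If the divergence between wireframe and footage grows too large between keys, you simply add another keyframe at the worst frame, exactly as described above.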

Once this is completed, any object added to the scene—once composited, properly integrated, color corrected (covered in Chapter 5), and rendered—should follow the motion of the scene and appear to actually exist within it, as shown in Figure 4.49.

Figure 4.49

Figure 4.49 Integrated 3D air traffic radar workstation set piece

3D Object Tracking

If we defined 2D stabilization as simply the 2D motion tracking data of a piece of footage, inverted and applied back to that footage, then you can think of the inversion of 3D camera motion track data as object tracking. Where the output of a 3D camera track is a static scene and a moving camera, the output of an object track is a static camera and a moving object or scene. This technique is particularly useful in cases such as adding 3D prosthetics or props to moving characters, covered in detail in Chapter 7.
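Mathematically, that inversion is just a matrix inverse per frame. The sketch below illustrates the idea with NumPy, using hand-made translation matrices standing in for a solved camera track; this is a conceptual sketch, not the export format of any particular tracking application.

```python
# Conceptual sketch of camera-track inversion: given per-frame camera
# world matrices from a 3D solve (moving camera, static scene), inverting
# each matrix yields the equivalent object track (static camera, moving
# object/scene). The matrices below are hand-made examples.

import numpy as np

def translation(x, y, z):
    """Build a 4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

# Solved camera positions for three frames of a simple dolly move
camera_track = [translation(0, 0, -300),
                translation(50, 0, -300),
                translation(100, 0, -300)]

# Invert per frame: the camera now stays fixed at the origin, and the
# scene/object carries the opposite motion instead.
object_track = [np.linalg.inv(m) for m in camera_track]
```

Composing any frame's object matrix with its original camera matrix gives the identity, which is exactly why the object track reproduces the same relative motion between camera and scene.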

Motion Control and Motion Capture

Finally, no discussion of matching camera movements would be complete without discussing motion control and motion capture.

Motion control is the use of computer-controlled robotics (Figure 4.50) to very precisely create, record, and repeatedly play back camera movements. This allows for complex slow-motion or replication shots, such as adding clones of the same character to the same scene, all within a continuous moving camera shot. On the pro side, motion control shots are very precise, align perfectly, and allow seamless integration. On the con side, motion control robots are expensive, huge, slow, and unwieldy, and they take a lot of time to set up, rehearse, and tear down.

Figure 4.50

Figure 4.50 3D illustration of a motion control camera rig

Similarly, motion capture, though also not actually camera tracking, is the capture of object motion data (as you would get with an object track) via various forms of data capture ranging from optical to wireless sensor arrays, as shown in Figure 4.51.

Figure 4.51

Figure 4.51 Wireless motion capture sensor camera rig

Motion capture is mainly used to record lifelike organic character motions and interactions, and although it is used extensively in VFX for 3D CGI characters and digital doubles, it falls more in the realm of 3D character animation than VFX and compositing.

Now that you understand the basics of VFX in both 2D and 3D, let’s jump right in and begin integrating some CG VFX in Chapter 5.
