
The Film Editing Room Handbook: Shooting

This chapter covers a brief history of video editing, three primary workflows, dealing with metadata, and preparing, syncing, and screening dailies.

The actual process of production is going through tremendous changes today. Crews may be shooting on 16mm or 35mm film, or they may be capturing on video. If they are shooting video (either digital or analog), they may be recording to tape or directly to some kind of computer file format. Regardless, everything that we do in the editing room today goes through digital video in some way, no matter what the final finishing format is going to be.

The big difference between film and video formats is that film generally moves through a projector at either 24 or 25 frames per second. It is either 16mm, 35mm, or 70mm wide. And that's it.

Video is another story entirely. There are more video formats than any one person can possibly keep track of. There are varying frame rates, such as 23.976, 24, 25, 29.97, and 30; interlaced and progressive display formats; different international standards such as NTSC, PAL, and SECAM; various forms of compression and decompression; and different forms of time codes.

In short, it's overwhelming. Creating a project in one format, when you'll be working with another (or with more than one), can lead to all sorts of editing issues. Taking a little historical tour of video editing helps put all these formats into a less intimidating perspective.

A Brief History of Video Editing

Years ago, when videotape became popular for television storage and viewing, shows were first edited on film in the old-fashioned way and then transferred to video for airing. Eventually, however, some producers (especially of live and news shows) wanted to reduce the expense of finishing on film by shooting and editing on videotape. At first, videotape was edited like film: a razor blade was used to slice the tape into small pieces. The individual strips of videotape were then taped (or spliced) together.

There were countless problems with this method. First, because the image recorded on videotape cannot be seen by looking at the tape itself, making edits exactly on the "frame line" was difficult. Second, if the editor wanted to change a cut, it was hard to splice consecutive frames back together without introducing unwanted interference.

To solve these problems, clever engineers came up with another way of editing videotape. With this new system you never actually cut the dailies. Instead, you played them back on one machine while simultaneously recording exactly the section you wanted for that particular edit onto a second machine. (This copying process is called making a transfer; the video copy is called either a transfer or a dub.) Then you found the next shot you wanted and transferred it to the end of the first transferred piece. In this way, you could assemble an entire show without destroying the original tapes.

Of course, there were problems with this method. Finding the exact frame lines on which to start and stop transferring the tapes was just as difficult as making physical slices with a razor. Eventually, technicians invented a machine that could find frames and start and stop the copying process with such precision that it produced no visible interference at the point where one shot was cut to another.

Another problem with this method was that one piece of videotape looked very much like any other piece. (With celluloid, on the other hand, all sorts of identifying numbers were put on the film.) This lack of identification made it more difficult to recognize and locate particular frames on tape than on film.

The Society of Motion Picture and Television Engineers (SMPTE, pronounced "simptee") solved this problem. SMPTE developed a standard, called SMPTE code, that consists of a special series of numbers electronically recorded on the tape. This number uniquely identifies each and every frame in a reel of tape. You could name a particular video frame 3:07:36:19, for example. When combined with a tape number, which you assigned, you could always find a particular frame amidst all the reels that had been shot.

Editing with SMPTE Time Code

SMPTE numbers are electronic signals, such as 3:07:36:19, embedded in the normal video signal. This number is the sequential hour, minute, second, and frame from an assigned starting point. For example, the start mark for reel number three might have the SMPTE code for three hours (3:00:00:00). The first frame of the picture (which would be at eight seconds) would be called 3:00:08:00. The frame with code number 3:07:36:19 would probably run seven minutes, 36 seconds, and 19 frames after that start mark.
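The arithmetic behind these numbers is simple frame counting. As a minimal sketch (assuming the 30-frames-per-second count that American video time code uses; the function names here are illustrative, not part of any real editing system), a time code can be converted to an absolute frame count and back:

```python
def timecode_to_frames(tc, fps=30):
    """Convert an HH:MM:SS:FF SMPTE code to an absolute frame count."""
    h, m, s, f = (int(part) for part in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def frames_to_timecode(total, fps=30):
    """Convert an absolute frame count back to HH:MM:SS:FF form."""
    h, rem = divmod(total, fps * 3600)
    m, rem = divmod(rem, fps * 60)
    s, f = divmod(rem, fps)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

# Distance from the reel-three start mark (3:00:00:00) to frame 3:07:36:19:
offset = timecode_to_frames("03:07:36:19") - timecode_to_frames("03:00:00:00")
# (7 * 60 + 36) * 30 + 19 = 13,699 frames: seven minutes, 36 seconds, 19 frames
```

This is exactly the subtraction an editing system performs when it measures how far a frame sits from a start mark.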

Now, let me explain the use of "probably."

Video footage numbers run a little differently from film footage numbers. For one thing, in the United States, time code counts 30 frames to each second of video. (Film is shot at 24 frames per second in the United States and at 25 frames per second in Europe and elsewhere.) A SMPTE number of 3:07:36:29 would be followed by 3:07:37:00. Also, because NTSC video actually plays at 29.97 frames per second, each second of video time code is not precisely one second of real time. Thus, a tape that started at 3:00:00:00 and was 9 minutes and 35 seconds long would not end at 3:09:35:00. This was confusing to people who needed to know real-time lengths. As a result, SMPTE developed another code standard that periodically skips frame numbers so that, at the end of 9 minutes and 35 seconds, the SMPTE code numbers would read exactly 3:09:35:00. This type of SMPTE code is known as drop frame code, and the original type became known as non-drop frame. Both types of code are still in use today. The code of choice, however, seems to be non-drop frame.
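The drop-frame bookkeeping can be sketched in code. Note that drop frame drops no actual frames of video; it merely skips frame numbers 00 and 01 at the start of every minute except each tenth minute, which keeps the running count in step with NTSC's true 29.97 frames-per-second rate. This helper function is illustrative only, not any real system's API:

```python
def drop_frame_to_real_frames(h, m, s, f):
    """Count the actual frames elapsed up to a drop-frame code HH:MM:SS;FF.

    Frame numbers 00 and 01 are skipped at the start of every minute
    except minutes divisible by ten, so the code stays aligned with
    NTSC's true 29.97 fps rate. No picture frames are discarded.
    """
    total_minutes = 60 * h + m
    dropped = 2 * (total_minutes - total_minutes // 10)
    return ((h * 3600 + m * 60 + s) * 30 + f) - dropped

# The code after 00:00:59;29 is 00:01:00;02, so 00:01:00;02 is frame 1,800.
# After ten minutes of drop-frame code, 17,982 frames have gone by, which
# is exactly ten minutes of real time at 29.97 fps:
frames = drop_frame_to_real_frames(0, 10, 0, 0)   # 17982
seconds = frames / 29.97                          # 600.0
```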

No matter what type of time code you use, editing systems use the SMPTE code to uniquely identify each frame of video. The editor can then find the exact frames needed for his or her cut points.

Editing Goes Non-Linear

With the invention of the electronic editing machine and SMPTE code, many barriers to effective editing on tape began to disappear. But there was still one giant problem: the linearity of the tape medium. Editing on videotape meant that whenever an edit was changed, every single edit after that point needed to be shifted earlier or later in the tape to accommodate that change. This enormous time waster often degraded the video quality as well, as editors copied the cuts from one videotape onto another.

So the engineers went back and created new editing systems to fix that problem. This is how they did it.

Most of these systems (such as Ediflex, Montage, Touchvision, EditDroid, E-Pix, and Laser Edit) worked by making numerous copies of the dailies tapes, either on videotape or videodisc. These copies were then put into many videotape or disc playing machines that were all connected to a computer.

To make an edit, the editor told the computer which frames he or she wanted, say a cut from shot one to shot two. The computer would then tell one of the video machines to play back (without transferring or recording to another tape machine) its copy of shot one until it reached the SMPTE code for the last frame the editor wanted, say 15:26:13:19. At that exact instant, the computer told a second video machine to begin playing from the SMPTE code number of the first frame the editor wanted in shot two (say, 07:53:27:07). This process continued on a shot-by-shot basis until there were no more edits to play back.

No tape was recorded until everybody was happy with the sequence. If the director wanted to add ten more frames to the end of shot five, the editor would give the computer that instruction. The computer would then tell the machine playing back shot number five to keep playing for ten more frames before switching over to playing back from the machine with shot six in it. All the machine had to do was add ten frames to the SMPTE code number that it was sending to the machine playing back shot five.
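In code terms, the change the director asked for is pure time-code arithmetic: extending an edit means adding frames to the out-point code the computer hands to the playback machine. A minimal sketch, assuming 30 fps non-drop code; the function and variable names are invented for illustration:

```python
FPS = 30  # non-drop frame count assumed for this sketch

def to_frames(tc):
    """HH:MM:SS:FF time code string -> absolute frame count."""
    h, m, s, f = (int(p) for p in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * FPS + f

def to_timecode(n):
    """Absolute frame count -> HH:MM:SS:FF time code string."""
    h, rem = divmod(n, FPS * 3600)
    m, rem = divmod(rem, FPS * 60)
    s, f = divmod(rem, FPS)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

# Shot five currently ends at this code; the director wants ten more frames.
out_point = "15:26:13:19"
new_out = to_timecode(to_frames(out_point) + 10)  # "15:26:13:29"
```

The same two conversions handle the rollover cases for free: adding ten frames to 15:26:13:25 correctly yields 15:26:14:05.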

The key to making this all work was having enough machines playing identical copies of the dailies. That way, the computer could think far enough ahead to have those machines cued up to the exact points where the footage would be needed four, five, six, or seven edits later.

This style of editing is called non-linear editing because the film doesn't have to be edited in a linear fashion, from beginning to end, in order to view it. While it was an ingenious solution, it required a large number of videotape copies of every set of dailies. Editors were slaves to the tape machine speeds. As a result, often the playback of an edit stopped while the computer waited for a videotape machine to get to the proper time code frame to "make the edit."

So, once again, the technical geniuses went to work.

Entering the Digital Era

Since the late 1980s, the process has changed so much that we can now avoid using stacks of videotapes entirely. Instead, the picture on each frame of film is converted into a digital image that computers can read. The digitized image and sound are stored on computer hard drives and played back from there (rather than from tape or videodisc). The computer's thinking process is the same as it was in the tape or disc form of non-linear editing, but now the computer controls locations on hard drives rather than locations on videotapes.

Because computer hard drives are much faster than videotape or videodisc machines, the editor no longer has to wait until all the videotapes are cued up in their proper places for playback. Access to any frame is virtually instantaneous.

The only weak point of digital non-linear editors is that playing back high-quality images from hard drives is very processor-intensive and requires massive amounts of disk drive space. The better the quality of the images, the more hard drive space is needed to store them. One way to minimize the amount of disk space required to store each frame is to compromise on image quality: images play back more smoothly when they are compressed (and therefore slightly degraded) so they fit more easily through the computer's input and output channels.

With the introduction of high-definition shooting, the desire to edit high-quality images forced the developers of non-linear editing systems to come up with compression/decompression methods (called codecs) that provide better images. It is the assistant editor's job to become familiar with the details of those codecs before the first frame arrives from the set.
