Camera settings for videography differ from settings a photographer might use when shooting stills, but many of the fundamental issues are the same. If you haven’t yet read Chapter 3, I recommend going back and reading it before you continue. Chapter 3 includes a lot of discussion about the current state of integrated camera drones and the cameras used in aerial imaging. Many of the settings and techniques used in stills capture also apply to videography.
Traditionally, digital cameras have treated video and stills separately, and pretty much all cameras feature distinct video capture modes with their own saved settings. Video adds time to the capture equation, and camera settings need to be evaluated for effectiveness both in a single frame and across multiple frames over time.
Video Size and Frame Rate
In consumer cameras, video size is typically described using between two and four variables. Most cameras include the resolution in pixels (horizontal and vertical or just vertical) and the frame rate. Frame rates can range from 24 fps to 240 fps or so, but some options might require changing a camera from NTSC to PAL, or vice versa. These two options seem like they should have gone away long ago but are still specified for output compatibility with video monitors and broadcast equipment in different global regions.
The two additional variables are scanning method, often specified by a p or i appended to a vertical resolution number or frame rate; and, less commonly, bit rate, which describes how much video data is stored per second.
Here are some common options for each variable:
- Resolution: 4K/UHD (4096x2160 or 3840x2160), FHD 1080p/i (1920x1080), HD 720p (1280x720), 480p (640x480)
- Frame rate: 24p, 25p (PAL) 30p (NTSC), 48p, 50p/i (PAL), 60p/i (NTSC), 120p, 240p
- Scanning method: Progressive scanning (“p”), interlaced scanning (“i”)
- Bit rate: A number in megabits per second (Mbps), not to be confused with megabytes per second (MBps). A lowercase b specifies bits, and an uppercase B specifies bytes. There are 8 bits in 1 byte (and 4 bits in 1 nibble). I’m not making this up.
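To make the bits-versus-bytes distinction concrete, here is a minimal sketch of the conversion you’ll use when comparing a video bit rate to a media card’s write speed:

```python
# Convert a video bit rate in megabits per second (Mbps) to the
# sustained write speed in megabytes per second (MBps): 8 bits = 1 byte.

def mbps_to_MBps(mbps: float) -> float:
    return mbps / 8

print(mbps_to_MBps(60))  # 7.5 -> a 60 Mbps stream writes 7.5 MB every second
```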
It might be easier to look at a common camera drone’s video size options, as shown in FIGURE 4.10.
FIGURE 4.10 The DJI Phantom 3 Professional’s Video Size options
The Phantom 3 Pro, which has been set to NTSC mode here, offers various combinations of 4K at 24/30 fps, and 720p/1080p at 24/30/48/60 fps. Frame rates of 25/50 replace 30/60 fps when the camera drone is in PAL mode.
All of this can be pretty confusing for someone who is new to video, so I’ll make some recommendations for the various sizes and frame rates I like to capture. These settings are not camera-dependent, although a camera must obviously support the resolution and frame rate in order to select the size. There are some specific settings I like to use when I shoot GoPro cameras, which can be found in the “GoPro” section later in this chapter.
- 4K/30 fps: This is my default capture resolution and frame rate. It’s the highest resolution for video capture, and 30 fps plays back on computers and the Web in a pleasing manner. Capturing 4K video gives you plenty of resolution for exporting frame grabs to be used as still images and leaves ample latitude for reframing and cropping in the edit if you are targeting 1080p for output.
- 4K/24 fps: I shoot 24 fps when I want my footage to match other footage shot at 24p, which many people consider to be more cinematic in look. 24p is what you’re used to seeing in the theater, but it doesn’t handle fast camera movement well and can easily look jerky. In aerial video, the camera is almost always moving, so I tend to avoid 24p as much as I can. Also, most computers, phones, and tablets use a 60 Hz refresh rate, and 24 doesn’t divide evenly into 60. This means that playing a video back at 24 fps on most devices will result in playback artifacts even if you shot your video perfectly.
- 1080p/60 fps: I like to shoot at 60 fps because it produces extremely smooth video, especially when played back on a computer, or via YouTube on Chrome, which has supported 60 fps since late 2014. 60 fps is the native frame rate of most playback devices. It’s also extremely versatile; it can be converted easily to lower-frame-rate video and can also be played back at a slower frame rate, which results in slow-motion video.
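The 24-into-60 mismatch mentioned in the 4K/24 fps bullet can be made concrete. A 60 Hz display would need to show each 24 fps frame for 2.5 refreshes; since a frame can only be held for a whole number of refreshes, players alternate holds of 2 and 3 refreshes (the familiar 3:2 pulldown), which is what makes motion look uneven. A quick sketch:

```python
# Map source frames onto display refreshes: how many refreshes is
# each frame held on screen? 60 / 24 = 2.5 is not a whole number,
# so 24p frames are held unevenly (3:2 pulldown judder).

def pulldown_pattern(fps: int, refresh_hz: int, n_frames: int):
    pattern = []
    shown = 0
    for frame in range(1, n_frames + 1):
        target = refresh_hz * frame // fps  # refreshes elapsed by end of frame
        pattern.append(target - shown)
        shown = target
    return pattern

print(pulldown_pattern(24, 60, 4))  # [2, 3, 2, 3] -> uneven hold times (judder)
print(pulldown_pattern(30, 60, 4))  # [2, 2, 2, 2] -> every frame held evenly
```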
Exposing for video is completely different from exposing for still imagery. In still imagery, the goal is usually to capture a frame that is as sharp as possible (unless you’re using blur as a creative effect). When you’re shooting video, you actually want some motion blur to avoid a staccato effect in motion during playback. The general rule is to expose each frame for half of its duration, so if you’re shooting at 30 fps, expose each frame for 1/60 second. If you are shooting at 60 fps, target a shutter speed of 1/125 second. Some people describe this as setting the shutter speed to double the frame rate because the denominator doubles, and it’s also referred to as a 180° shutter angle for historic reasons relating to how rotary shutters worked when exposing film.
Exposing each frame for half of its duration gives it a bit of motion blur, resulting in cinematic motion that is considered natural in look. Longer exposures yield more motion blur and can make motion appear “smeary.” Short exposures will stop motion in each frame, making movement look staccato and strobed.
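The 180° rule reduces to simple arithmetic: exposure time = 1 / (2 × frame rate). A small sketch:

```python
def shutter_180(fps: float) -> float:
    """Exposure time in seconds for a 180-degree shutter:
    half of each frame's 1/fps duration, i.e. 1 / (2 * fps)."""
    return 1 / (2 * fps)

for fps in (24, 30, 60):
    print(f"{fps} fps -> 1/{2 * fps} s")
# 24 fps -> 1/48 s, 30 fps -> 1/60 s, 60 fps -> 1/120 s
```

For 60 fps the exact value is 1/120 second; 1/125 second is simply the nearest standard shutter step most cameras offer.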
Most videographers shoot during the day using neutral-density (ND) filters, which block light to allow for longer exposure times (FIGURE 4.11). A camera shooting at ISO 100 with a lens at f/2.8 might typically require a four-stop ND filter (also called an ND16 because 1 stop = 2x the light; 4 stops = 2^4, or 16x the light) to cut enough light to hit shutter speeds of 1/60 second. If video is your main goal in learning how to use a camera drone, you should invest in a set of gimbal-compatible ND filters ranging from one to four stops. Polarizers also cut some light and can be used on their own or in conjunction with an ND filter.
FIGURE 4.11 A set of six filters from Polar Pro, including ND, polarizing, and combination filters
At the moment, adjustable apertures are found mostly on the land cameras carried by heavy-lift drones. Most integrated and small camera drones use cameras with a fixed aperture of f/2.8, which is considered fairly light-sensitive. At f/2.8 and ISO 100, a camera can shoot at 1/60 second without overexposing the scene only after the sun has set; before sunset, shutter speeds need to be much faster for a proper exposure. Getting to 1/60 second (or 1/125 second, if you’re targeting 60 fps) can be tricky, especially if you’re working with rapidly changing light levels, but you can also use ISO as a variable in exposure. If the correct exposure is 1/30 second at ISO 100, you can instead shoot at 1/60 second at ISO 200, at the cost of a bit more noise in your footage.
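The stop arithmetic in the last two paragraphs can be sketched as follows: each ND stop halves the light, and doubling ISO halves the required exposure time.

```python
def nd_factor(stops: int) -> int:
    """Light reduction of an ND filter: each stop halves the light,
    so a 4-stop filter cuts light by 2**4 = 16x (hence 'ND16')."""
    return 2 ** stops

def equivalent_shutter(shutter_s: float, iso_from: int, iso_to: int) -> float:
    """Shutter time giving the same exposure after an ISO change:
    doubling ISO halves the required exposure time."""
    return shutter_s * iso_from / iso_to

print(nd_factor(4))                        # 16
print(equivalent_shutter(1/30, 100, 200))  # ~1/60 s, at the cost of more noise
```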
Not having aperture as a variable is a temporary problem and is likely to change quickly as more sophisticated cameras begin to make their way into small and medium-sized integrated camera drones. DJI’s new Zenmuse X5 Micro Four Thirds camera, for example, features both interchangeable lenses and adjustable aperture (FIGURE 4.12), which means that photographers will again be free to use aperture as a variable in exposure and could achieve 1/60 second even on a bright-but-overcast day at around f/11, ISO 100. If pushed, an even smaller aperture could be used, but most videographers don’t like to stop down too far because the whole image starts to suffer from diffraction softness when the aperture becomes too small. At f/11 and a sensor resolution of 16 megapixels, a shot taken by a Micro Four Thirds camera is already theoretically beyond its diffraction limit.
FIGURE 4.12 DJI’s new Zenmuse X5 gimbal and Micro Four Thirds camera (prototype, with 15mm lens) brings interchangeable lenses and medium-sized sensors to integrated camera drones.
In the real world, diffraction needs to be balanced with depth of field and potential image degradation from ND filters, so you should do your own tests if you’re interested in figuring out how far you can stop down before your image starts to degrade. There is also a lot of web material about diffraction limits—it’s worth reading, if this is a topic that interests you.
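As a rough illustration of why f/11 pushes past the diffraction limit of a 16-megapixel Micro Four Thirds sensor, you can compare the Airy disk diameter (approximately 2.44 × wavelength × f-number) to the sensor’s pixel pitch. The numbers below (550 nm green light, a 17.3 mm sensor width, 4608 horizontal pixels) are my assumptions for illustration, not manufacturer specifications:

```python
# Back-of-envelope diffraction check. Softening becomes visible
# once the Airy disk spans roughly two pixels or more.

WAVELENGTH_MM = 550e-6   # 550 nm green light (assumed)
SENSOR_WIDTH_MM = 17.3   # Micro Four Thirds sensor width (assumed)
H_PIXELS = 4608          # horizontal pixels on a ~16 MP sensor (assumed)

pixel_pitch_mm = SENSOR_WIDTH_MM / H_PIXELS  # ~0.00375 mm (3.75 um)

def airy_disk_mm(f_number: float) -> float:
    return 2.44 * WAVELENGTH_MM * f_number

for n in (5.6, 8, 11):
    pixels = airy_disk_mm(n) / pixel_pitch_mm
    print(f"f/{n}: Airy disk spans ~{pixels:.1f} pixels")
```

Under these assumptions, the disk is already about two pixels wide at f/5.6 and nearly four pixels wide at f/11, which is why stopping down that far softens the whole image.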
Regarding exposure, locking it down and shooting manually is usually considered to produce video with a more cinematic feel; one rarely sees exposure changes during any single clip in a movie. Integrated cameras sometimes support full manual exposure (such as on DJI drones), but action cameras like GoPro HEROs typically support only exposure compensation, not full manual control. If your camera offers only auto exposure, it needs to have a good auto-exposure system and be able to transition from one exposure to another smoothly. The Phantom 2 Vision+ was the first integrated camera drone with a brushless gimbal. This offered a lot of convenience, but the camera was unable to move smoothly between exposure levels (it jumped), and as a result, any exposure change during a video essentially ruined its usability.
After all of this talk about manual exposure, ND filters, and 180° shutter angles, it should be noted that the vast majority of hobbyists and mainstream camera owners shoot video using extremely fast shutter speeds, especially during the day, and most shoot in auto exposure. Almost all smartphones use fixed apertures between f/1.8 and f/2.4, so it isn’t uncommon to see shutter speeds of 1/1000 second (or faster) in videos taken during the day. I don’t hear many people complaining about stroboscopic video; users’ tastes may be changing as a result of watching so much video with sharp individual frames.
Video shot from a modern $800 camera drone like the DJI Phantom 3 Standard often looks so good out of the box that it will be more than sufficient for most people’s needs, even when run in full auto. All of my most popular videos were shot during the day without using ND filters, and not a single person has complained about anything looking strange aside from the few who have noticed 30p to 24p conversion artifacts stemming from an editor’s creative desires.
Most stills photographers don’t think much about white balance when they’re out shooting, especially if they’re saving to raw formats, which allow white balance (WB) to be changed in post without any loss of quality. But in video, white balance matters a lot. In all but the highest-end systems, video is not saved to raw, and getting it right in camera is important. It isn’t necessarily true that white balance can’t be changed in post, but it’s hard to color-correct when white balance is changing over time.
Most compact cameras used on camera drones do not have sophisticated auto white balance algorithms, and white balance can swing wildly as the composition changes over the course of a single clip. Correcting against a moving target is not easy, and it’s almost always better to capture using a fixed white balance setting (usually, anything but auto).
I usually set my camera to a “native” WB if the setting is available, as it is on GoPros (FIGURE 4.13). Native WB settings are designed to allow the most information to be captured and stored, giving videographers the maximum latitude to color-grade in post. If a native setting isn’t available, choosing something close to the current conditions is good enough. Most cameras offer options including “sunny,” “cloudy,” “incandescent,” and custom white balance, which allows you to either point the camera at something that it should consider to be white, such as a white or gray card, or dial in a specific color temperature in degrees Kelvin.
FIGURE 4.13 White balance options on a GoPro HERO4 Black
Correct white balance is especially important if the footage you are shooting needs to be used immediately, either via a live broadcast (which I’ll discuss later in this chapter) or when timely delivery is critical. In those cases, choose a white balance that looks good out of the camera.
Shooting in slow motion might seem like something reserved for dedicated high-speed cameras, but support is actually built into most recent cameras—even the ones found on relatively inexpensive camera drones. “Slow motion” really just means that footage can be reproduced at slower than real time. Since most video is played back between 24 and 30 fps, any capture at a faster frame rate than that can be played back in slow motion with no loss of quality. For example, video shot at 60 fps played back at 30 fps results in playback at 50 percent of real time, which is convincing as slow-motion footage.
The iPhone 6 and action cameras like the GoPro HERO4 Black can shoot as fast as 240 fps, and Sony’s new RX100 IV point-and-shoot camera can capture video at a staggering 960 fps. In these relatively inexpensive cameras, there is typically a compromise between frame rate and video quality. Cameras that were designed to shoot at high speed are expensive and heavy because they have the sensors and imaging pipelines required to capture high-quality, high-speed video.
Although slow motion is most obvious at 60 fps and faster, even shooting at 30p and slowing playback down to 24p (a 20 percent slowdown) can be effective in making movement seem more relaxed or regal.
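The slowdown arithmetic in the last few paragraphs reduces to a single ratio, playback rate divided by capture rate:

```python
def playback_speed(capture_fps: float, playback_fps: float) -> float:
    """Fraction of real time when footage is played at a slower
    frame rate (1.0 = real time, 0.5 = half speed)."""
    return playback_fps / capture_fps

print(playback_speed(60, 30))   # 0.5 -> half speed
print(playback_speed(30, 24))   # 0.8 -> the 20 percent slowdown described above
print(playback_speed(700, 30))  # ~0.043 -> roughly 23x slower than real time
```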
Video capture at 60 fps and faster is no longer a hurdle, but showing the video in slow motion can be difficult if you aren’t familiar with editing software. Most editing software, even free and entry-level programs such as iMovie and Windows Movie Maker (or GoPro Studio, for GoPro users), has rudimentary controls for slow-motion playback. iOS and Android apps make it easy to convert high-frame-rate video into slower-playback versions, but they aren’t really designed (yet) to work with videos that weren’t shot using the host device. For more advanced control of slow motion, I recommend subscribing to Adobe’s Creative Cloud and learning Adobe Premiere Pro CC, which has tools sophisticated enough for full film production (FIGURE 4.14).
FIGURE 4.14 In Adobe Premiere Pro CC, interpreting a clip as 29.97 fps instead of 59.94 fps results in slow-motion playback at 50 percent of real time.
In terms of camera motion and action, it’s usually more effective to fly a drone quickly and to capture motion that is also happening quickly. Attention span is a rare commodity these days, and if your clip doesn’t fit into 10 to 15 seconds, you’ll likely have lost the majority of your audience by the end.
If you decide you’re interested in capturing slow-motion video from the air, shoot at the frame rate that gives you the slowdown you desire, and familiarize yourself with the relevant adjustments in post-production using your chosen video editor. Once a video is slowed down in post and exported at normal frame rates, it will play back in slow motion everywhere, even if you upload it to YouTube or Vimeo for sharing.
Here are two links to sample slow-motion clips:
- Aerial rocks and surge, played back at 50 percent speed (shot with a DJI Inspire 1 at 60 fps and played back at 30 fps): http://ech.cc/droneslomo
- Ocean sparkles (land-based), shot with a $5,495 Edgertronic high-speed camera at 700 fps and played back at 30 fps (23x slower than real time): http://ech.cc/oceansparkles
Capturing video, especially in 4K, is data-intensive, and some of the small cameras used in consumer camera drones capture video at up to 60 Mbps, which is 7.5 MBps. At that rate, an 8 GB media card will fill up in just under 18 minutes, and obviously, the media card needs to be able to sustain a write speed of at least 7.5 MBps. Some cameras will detect slow media cards and display a warning, but not all do. Most cameras will do just fine with an SD or microSD card that is labeled “UHS-1” or “Class 10,” both of which indicate that they can sustain writes of 10 MBps.
If you’re using higher-end cameras that capture video at bit rates higher than 60 Mbps, you will need to use even faster media.
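The card-capacity arithmetic above is easy to generalize. A minimal sketch (using 1 GB = 1000 MB, as card makers do):

```python
def recording_minutes(card_gb: float, bitrate_mbps: float) -> float:
    """Approximate recording time for a card of card_gb gigabytes
    at a video bit rate of bitrate_mbps megabits per second."""
    write_MBps = bitrate_mbps / 8          # bits -> bytes
    seconds = card_gb * 1000 / write_MBps  # card makers use 1 GB = 1000 MB
    return seconds / 60

print(round(recording_minutes(8, 60), 1))  # 17.8 -> just under 18 minutes
```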
Professional Capture Modes
Some consumer and prosumer cameras feature capture settings that are designed to retain as much information as possible when shooting video so editors have the latitude to make adjustments in post. These capture formats usually bypass the processing that normally makes video look good to humans, resulting in footage that is desaturated, soft, and flat in appearance. A good editor can then shape the video into the desired style for the final product.
Both GoPro and DJI have professional capture modes built into their cameras; I’ll talk more in detail about these modes later in this chapter.
Chapter 3 contains a great deal of information about the benefits and disadvantages of using integrated camera drones for aerial imaging. For video, one of the obvious benefits is that pilots can both start and stop video at will, capturing only footage that was meant to be saved. Full manual exposure and remote control of deep camera settings are other important benefits, as is integrated FPV.
Another difference is that the lenses currently used on camera drones like the DJI Phantom 3 and Inspire 1 are less wide than those commonly found on action cameras like GoPros. Most action cams use ultra-wide fisheye optics (FIGURE 4.15) because the cameras are designed to capture people doing things close to the camera (the cameras are often mounted on the people themselves to capture the user’s point of view).
FIGURE 4.15 A GoPro camera doing what it was designed to do: capturing content close to the camera. Wide-angle, fisheye distortion makes the horizon bubble-shaped.
Fisheye optics allow for a large field of view (FOV) using small optics, and the combination of wide-angle optics and small sensors provides a deep depth of field in which almost everything is in focus at the same time. In contrast, the DJI Phantom 3’s camera has an FOV of about 94°, which is similar to a 20mm, rectilinear lens on a full-frame camera (as opposed to the 170° lens on the GoPro HERO4). Rectilinear lenses yield straight lines with little to no barrel distortion and look more natural. Most serious aerial GoPro shooters remove distortion during post-processing or shoot in modes that are less wide, sacrificing some resolution in the process.
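You can check the 94°-to-20mm equivalence yourself with the rectilinear field-of-view formula, FOV = 2 × atan(d / 2f), where d is the sensor dimension across which the FOV is measured (the full-frame diagonal is about 43.3 mm):

```python
import math

def rectilinear_fov_deg(focal_mm: float, sensor_dim_mm: float = 43.3) -> float:
    """Field of view of a rectilinear lens across a sensor dimension
    (default: the ~43.3 mm full-frame diagonal)."""
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_mm)))

print(f"{rectilinear_fov_deg(20):.0f} degrees")  # close to the Phantom 3's ~94 degrees
```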
This lens difference is likely to be temporary; some of the latest action cameras are available with lenses that are less wide with less barrel distortion, and there will probably be many models to choose from in the future. But at this moment, one of the benefits of using integrated cameras is that the lenses yield more natural-looking images out of the camera.
I mentioned earlier in this chapter that the remote control of camera settings is an important feature of integrated camera drones. Using a DJI Phantom 3, you can, in a single flight, shoot a bunch of still images and record many videos in multiple resolutions, frame rates, and exposures, all without landing or feeling like you have to jump through hoops to change settings. The user experience in well-designed camera drones should make you feel like you’re operating a camera in your hands, and attention to detail makes a big difference.
DJI’s LOG Mode
DJI’s Inspire 1 and Phantom 3 camera drones feature a capture mode called LOG mode, which can be accessed in the Color settings of the camera via the DJI GO app (FIGURE 4.16). Shooting in LOG mode results in the standard log-mode effects: less saturation, less sharpening, and a flat look from a gamma curve that maximizes data in the resulting file.
FIGURE 4.16 Selecting LOG mode in the DJI GO app
Competent editors can grade and sharpen this video to get it to look good, but you can also use DJI’s Inspire 1/Phantom 3 transcoding tool to add the punch back into the video, saving an intermediate file as high-bitrate Apple ProRes 422LT. Note that this makes for absolutely gigantic files—about 60 MBps for 4K footage, or 3.6 GB/minute. The transcoding tool is available from the download pages at dji.com.
Feedback from users suggests that learning to color grade in an editor like Adobe Premiere Pro yields equivalent results and a more convenient workflow than using the DJI transcoding tool to create an intermediate working file.
Both integrated and third-party cameras have been used successfully to capture beautiful aerial video, and both kinds of cameras have benefits and disadvantages. With the ever-popular GoPro models, as with any camera, it’s important to make sure your settings match what is required to accomplish your aerial storytelling goals.
GoPro’s Protune is a professional setting that enables higher-bitrate capture (up to 65 Mbps for 4K video) and settings to control white balance, color, ISO, sharpness, and exposure compensation (FIGURE 4.17).
FIGURE 4.17 GoPro’s Protune settings for video on a HERO4 Black
I always shoot in Protune unless I need to deliver small files to someone immediately after a shoot. These days, computers are fast and hard drives are cheap, so there is almost no reason to skimp on capture quality.
Video Size and FOV
GoPro HERO cameras have a wide variety of video size variables that can be mixed and matched in different combinations. In the “Video Size and Frame Rate” section earlier in this chapter, I wrote about some of my favorite settings to use when shooting video from the air. GoPro cameras have another variable to consider: field of view, which can be set to wide, medium, or narrow. The various combinations of resolution and FOV settings result in a wide range of captured image quality and actual FOV. My favorite settings to use when shooting a GoPro HERO3+ Black or HERO4 Black in the air are the following:
- 4K (2.7K on HERO3+ Black), 30 fps, Wide FOV, Protune on, Native White Balance, GoPro Color, ISO Limit 400, Sharpness Low, EV Comp as needed: This is my default setting because it is the highest resolution for capture in the highest capture quality, with the exception of using GoPro Color instead of Flat. I find GoPro Color to be close to what I want my end product to look like, so I choose to allow the camera to make the video pop instead of forcing me to do it in post.
- 2.7K, 30 fps, Medium FOV, Protune on, Native White Balance, GoPro Color, ISO Limit 400, Sharpness Low, EV Comp as needed (HERO4 Black only): The 2.7K Medium setting on the HERO4 Black is a 1:1 crop from the center of the sensor and looks great. You might ask why you would shoot at 2.7K instead of 4K if the crop uses a 1:1 pixel readout (which would seem similar to just cropping from the 4K image in post). The answer is that the full available data rate is devoted to the smaller 2.7K frame, which means that video shot at 2.7K Medium is better than cropped video shot at 4K Wide.
- 1080p, 60 fps, Wide FOV, Protune on, Native White Balance, GoPro Color, ISO Limit 400, Sharpness Low, EV Comp as needed.
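The reasoning in the 2.7K Medium bullet can be quantified: if the encoder spends a similar bit rate on a smaller frame, each pixel gets more data. The 60 Mbps figure and frame dimensions below are illustrative assumptions, not GoPro specifications:

```python
# Same bit rate over fewer pixels = more bits available per pixel.

def bits_per_pixel(bitrate_mbps: float, width: int, height: int, fps: int) -> float:
    return bitrate_mbps * 1e6 / (width * height * fps)

bpp_4k  = bits_per_pixel(60, 3840, 2160, 30)  # 4K Wide frame
bpp_27k = bits_per_pixel(60, 2704, 1520, 30)  # 2.7K frame: ~2x the bits per pixel
print(f"4K: {bpp_4k:.2f} bits/pixel, 2.7K: {bpp_27k:.2f} bits/pixel")
```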
I have focused on the GoPro HERO3+ Black and HERO4 Black here, but you can use similar decision-making principles when determining what size, FOV, and other settings to choose on other cameras. If you do happen to be using a HERO4 Black, I highly recommend reading Cinema5D’s article “How to get best quality from GoPro HERO4 Black,” at www.ech.cc/gopro4best.
Isn’t the Earth Flat?
The fisheye barrel distortion that I mentioned earlier can easily be seen in the picture of Stanford University, which was taken with a GoPro HERO3+ Black on a DJI Phantom 2 and Zenmuse gimbal (FIGURE 4.18).
FIGURE 4.18 Fisheye barrel distortion from a GoPro
This kind of distortion is more pronounced the further from the center of the image you look. Near the top and bottom of each frame, horizontal lines bow outward aggressively, and near the left and right sides of the frame, vertical lines distort. In a still image, some folks might actually prefer this look, or at least not be bothered too much by it, and in close-up action videos in which the subject is close to the frame doing something crazy, one might not even notice. But in aerial video, the distortion is obvious, especially if a camera is pitched up or down at any point.
GoPro provides a Remove Fisheye option in its GoPro Studio desktop app, which is the most convenient way to correct for the distortion found in all videos captured by a GoPro. However, the fisheye removal process also transcodes the video, which can take a long time on slow computers and results in large intermediate files (FIGURE 4.19).
FIGURE 4.19 An aerial image of Google’s secret barge in Alameda, California, before and after fisheye distortion removal
Adobe Premiere Pro CC users can take advantage of built-in lens distortion removal presets for GoPro HERO2, HERO3 Black, and HERO3+ Black.