

Working with Raw Files in Capture NX 2

Contents

  1. What Is Raw, and Why Should You Use It?
  2. Processing Raw Images
  3. Saving Raw Files
This chapter is from the book

Ben Long explains basic concepts for editing and processing raw files and looks at Capture NX’s raw processing controls specifically for Nikon cameras.

If you have a Nikon camera that can shoot raw files, then Capture NX provides you with some important additional tools that will allow you to perform edits on your raw files and to maintain a level of quality that simply can’t be achieved when shooting in JPEG or TIFF mode.

In this chapter we’re going to look at Capture NX’s raw processing capabilities. Unfortunately, Capture NX does not support raw formats from other cameras, so as mentioned earlier, if you’re using a non-Nikon camera and are shooting raw, you’ll have to do your raw conversion elsewhere. However, while Capture NX may not be able to handle your non-Nikon raw files, you still might want to read the first few sections of this chapter to get a better understanding of how your camera works and of why you might want to consider shooting raw. The concepts presented in this first section are true for any camera that shoots raw, and for any raw conversion software. In the second half of this chapter, we’ll get to Capture NX’s specific raw controls.

What Is Raw, and Why Should You Use It?

There are a number of reasons to shoot raw instead of JPEG—in fact, I’ve shot exclusively raw with my digital cameras for years, because I find the advantages that it provides are worth the few annoyances that raw can introduce to your workflow. In this section, we’re going to look at why you might want to make the switch to raw. To better understand how raw and JPEG differ, you need to know some of the details of how your camera works.

How Your Digital Camera Creates an Image

A piece of film is a remarkably self-contained apparatus. Nothing more than a piece of celluloid with a thin layer of chemical emulsion, film works without any power source, and with no mechanical apparatus. (The battery and mechanics in your film camera are there just to make photography easier—they’re not actually essential.) Despite its simplicity, a piece of film is both an imaging technology and a storage device. By comparison, your digital camera requires a large number of sophisticated technologies. Packed with everything from light-sensitive imaging chips to custom amplifiers to onboard computers and removable storage technologies, there’s a lot going on in even the smallest digital camera.

Just as film photographers of old had an in-depth understanding of the imaging properties of their films and chemistries, digital photographers can benefit from a deeper knowledge of how their camera works.

How a digital camera sees

From one perspective, there’s very little difference between a digital camera and a film camera. They both use a lens to focus light through an aperture and a shutter onto a focal plane. The only detail that makes a camera a digital camera is that the focal plane holds a digital image sensor rather than a piece of film.

The quality of your final image is determined by many factors, from the sophistication of your lens to your (or your light meter’s) decision as to which shutter speed and aperture size to use. One downside to digital photography is that the actual imaging technology is a static, unchangeable part of your camera. You can’t change the image sensor if you don’t like the quality of its output.

Just like a piece of film, an image sensor is light sensitive, be it a CCD (charge-coupled device) or CMOS (complementary metal oxide semiconductor) chip. The surface of the chip is divided into a grid of photosites, one for each pixel in the final image. Each photosite has a diode that is sensitive to light (a photodiode). The photodiode produces an electric charge proportional to the amount of light it receives during an exposure. To interpret the image captured by the sensor, the voltage of each photosite is measured and then converted to a digital value, thus producing a digital representation of the pattern of light that fell on the sensor’s surface.

Measuring the varying amounts of light that were focused onto the surface of the sensor (a process called sampling) yields a big batch of numbers, which in turn can be processed into a final image.
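If it helps to see the sampling step concretely, here is a minimal Python sketch of an analog-to-digital conversion. The normalized voltages and the 12-bit depth are illustrative assumptions, not the behavior of any particular camera:

```python
# A toy sketch of sampling: each photosite's voltage (here a float from
# 0.0 to 1.0) is quantized by an analog-to-digital converter into one of a
# fixed number of discrete values.

def quantize(voltage, bits=12):
    """Map a normalized photosite voltage to an integer sample."""
    levels = 2 ** bits  # 4096 levels for a 12-bit converter
    return min(int(voltage * levels), levels - 1)

# Varying light across four hypothetical photosites:
voltages = [0.0, 0.25, 0.5, 1.0]
samples = [quantize(v) for v in voltages]
print(samples)  # [0, 1024, 2048, 4095]
```

The "big batch of numbers" the chapter describes is exactly this list of samples, one per photosite, before any color processing has happened.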

However, knowing how much light there is in a particular photosite doesn’t tell you anything about the color of the resulting pixel. Rather, all you have is a record of the varying brightness, or luminance, values that have struck the image sensor. This is fine if you’re interested in black-and-white photography. If you want to shoot full color, though, things get a little more complex.

Mixing a batch of color

If you’ve ever bought paint at a hardware store, or if you learned about the color wheel in grade school, you might already know that you can mix together a few primary colors of ink pigments to create every other color. Light works the same way, but the primary colors of ink are cyan, magenta, and yellow, whereas the primary colors of light are red, green, and blue. What’s more, ink pigments mix together in a subtractive process. As you mix more ink colors, your result gets darker until the ink turns black. Light mixes together in an additive process; as you mix more light, the light gets brighter until it ultimately turns white.
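The additive behavior of light is easy to sketch with 8-bit RGB triples, where 255 is full brightness in a channel. This is only a cartoon of the physics, but it shows why mixing more light moves you toward white rather than black:

```python
# Additive color mixing: adding light sources channel by channel brightens
# the result, clamping at full brightness (255 per 8-bit channel).

def mix(*lights):
    """Add light sources channel by channel, clamping at full brightness."""
    return tuple(min(sum(channel), 255) for channel in zip(*lights))

red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)
print(mix(red, green))        # (255, 255, 0) -- yellow
print(mix(red, green, blue))  # (255, 255, 255) -- white
```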

In 1861, James Clerk Maxwell and Thomas Sutton performed an experiment to test a theory about a way of creating color photographs. They shot three black-and-white photographs of a tartan ribbon, fitting their camera with a different colored filter for each photo: one red, one blue, and one green. Later, they projected the images using three separate projectors, each fitted with the appropriate red, green, or blue filter. When the projected images were superimposed over each other, they yielded a full-color picture.

Your digital camera uses a similar technique—and a lot of math—to calculate the correct color of each pixel. Every photosite on your camera’s sensor is covered with a colored filter—red, green, or blue. Red and green filters alternate in one row, and blue and green alternate in the following row (Figure 4.1). There are twice as many green photosites because your eyes are much more sensitive to green than to any other color.

Figure 4.1

Figure 4.1 In a typical digital camera, each pixel on the image sensor is covered with a colored filter: red, green, or blue. Although your camera doesn’t capture full color for each pixel, it can interpolate the correct color of any pixel by analyzing the color of the surrounding pixels. This array of colors is called a Bayer pattern after Dr. Bryce Bayer, the Kodak scientist who devised it in the early 1970s.

As you’ve probably already surmised, this scheme still doesn’t produce a color image. For that, the filtered pixel data must be run through an interpolation algorithm, which calculates the correct color for each pixel by analyzing the color of its filtered neighbors.

Say that you want to determine the color of a particular pixel that has a green filter and a value of 100 percent. If you look at the surrounding pixels with their mix of red, blue, and green filters and find each of those pixels also has a value of 100 percent, it’s a pretty safe bet that the correct color of the pixel in question is white, since 100 percent each of red, green, and blue yields white (Figure 4.2).

Figure 4.2

Figure 4.2 To calculate the true color of the 100 percent green pixel in the middle of this grid, you examine the surrounding pixels. Because they’re all 100 percent, it’s a good chance that the target pixel is pure white, since 100 percent of red, green, and blue combines to produce white.

Of course, the pixel may be some other color—a single dot of color in a field of white. However, pixels are extremely tiny (in the case of a typical consumer digital camera, 26 million pixels could fit on a dime), so odds are small that the pixel is a color other than white. Nevertheless, there can be sudden changes of color in an image—as is the case, for example, in an object with a sharply defined edge. To help average out the colors from pixel to pixel and therefore improve the chances of an accurate calculation, digital cameras contain a special filter that blurs the image slightly, thus gently smearing the color. Although blurring an image may seem antithetical to good photography, the amount of blur introduced is not so great that it can’t be corrected for later in software.
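A drastically simplified sketch of the Figure 4.2 case can make the averaging idea concrete. Real demosaicing algorithms are far more sophisticated (and proprietary); this only shows how a green-filtered photosite's missing red and blue components might be estimated from its neighbors:

```python
# Estimate the full color of one green-filtered photosite from its red- and
# blue-filtered neighbors, each reading a normalized value from 0.0 to 1.0.
# This is a toy illustration, not a production demosaicing algorithm.

def interpolate(center_green, neighbors):
    """Fill in missing red/blue for a green photosite by averaging neighbors."""
    reds = [value for color, value in neighbors if color == "R"]
    blues = [value for color, value in neighbors if color == "B"]
    return (sum(reds) / len(reds), center_green, sum(blues) / len(blues))

# All neighbors read 100 percent, so the best guess is pure white:
neighbors = [("R", 1.0), ("R", 1.0), ("B", 1.0), ("B", 1.0)]
print(interpolate(1.0, neighbors))  # (1.0, 1.0, 1.0) -- pure white
```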

This process of interpolation is called demosaicing, a cumbersome word derived from the idea of breaking down the chip’s mosaic of RGB-filtered pixels into a full-color image. There are many different demosaicing algorithms. Because the camera’s ability to accurately demosaic has a tremendous bearing on the overall color quality and accuracy of the camera’s images, demosaicing algorithms are closely guarded trade secrets.

The Bayer pattern is an example of a color filter array (CFA). Not all cameras use an array of red, green, and blue filters. For example, some cameras use cyan, yellow, green, and magenta arrays. In the end, a vendor’s choice of CFA doesn’t really matter as long as the camera yields color that you like.

A final image from a digital camera consists of three separate color channels, one each for red, green, and blue information. Just as in Maxwell’s and Sutton’s experiment, when these three channels are combined, you get a full-color image (Figure 4.3).

Figure 4.3

Figure 4.3 Your camera creates a full-color image by combining three separate red, green, and blue channels. If you’re confused by the fact that an individual channel appears in grayscale, remember that each channel contains only one color component—so the red channel contains just the red information for the image; brighter tones represent more red, darker tones, less.

Digital Image Composition

A final image from your digital camera is composed of a grid of tiny pixels that correspond to the photosites on your camera’s image sensor. The color of each pixel in an image is usually specified using three numbers: one for the red component, one for the blue, and one for the green. If your final image is an 8-bit image, then 8-bit numbers are used to represent each component. This means that any pixel in any individual channel can have a value between 0 and 255. When three 8-bit channels are combined, you have a composite 24-bit image that is capable of displaying any of approximately 16 million colors. Although this is a far greater number of colors than the eye can perceive, bear in mind that a lot of these colors are essentially redundant. RGB values of 1, 0, 0 (1 red, 0 green, 0 blue) and 1, 1, 0 are not significantly different. Therefore, the number of significant colors in that 16-million color assortment is actually much smaller.
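The arithmetic behind "approximately 16 million colors" is simply three 8-bit channels of 256 levels each, multiplied together; the same calculation for 16-bit channels shows where the "trillions" figure comes from:

```python
# Counting the colors a composite image can represent.

levels_8bit = 2 ** 8             # 256 values per 8-bit channel
colors_24bit = levels_8bit ** 3  # three channels combined
print(colors_24bit)              # 16777216 -- about 16 million

levels_16bit = 2 ** 16             # 65536 values per 16-bit channel
colors_48bit = levels_16bit ** 3
print(colors_48bit)                # 281474976710656 -- hundreds of trillions
```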

In a 16-bit image, 16-bit numbers are used for each channel, so any pixel in any channel can have a value between 0 and 65,535, and the composite image can represent trillions of unique colors. Many of these values are still visually redundant, just as in an 8-bit image, but a 16-bit image contains a far greater number of significant colors. You won’t necessarily see more colors, but these extra values will give you more latitude when editing.

The concept of color channels is important to understand because some edits and corrections can be achieved only by manipulating the individual color channels of an image. What’s more, some problems are easier to identify and solve if you are in the habit of thinking in terms of individual channels.

How Your Camera Makes a JPEG Image

By default, your camera probably shoots in JPEG mode. In fact, depending on your camera, JPEG images might be your only option. More advanced cameras will also offer a raw mode. When a camera is shooting JPEG images, a lot of things happen after the sensor makes its capture.

The camera first reads the data off of the sensor and then amplifies it. As you learned earlier, each pixel on your image sensor produces a voltage that is proportional to the amount of light that struck that site. When you increase the ISO setting on your camera, all you’re doing is increasing the amount of amplification that is applied to the original data. Unfortunately, just as turning up the volume on your stereo produces more noise and hiss, turning up the amount of amplification in your camera increases the amount of noise (ugly grainy patterns) in your image (Figure 4.4).

Figure 4.4

Figure 4.4 As you increase the ISO setting of your camera (often done when shooting in low light), you increase the amount of noise in your image.
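The stereo analogy can be sketched numerically. In this toy model (the numbers are invented, not measured from any real sensor), the sensor's read noise is added before amplification, so raising the gain, which is all an ISO increase does, amplifies the noise right along with the signal:

```python
# A toy model of ISO amplification: the same gain scales both the captured
# light and the sensor's noise floor, so noise grows with the ISO setting.
import random

random.seed(1)  # fixed seed so the sketch is repeatable

def read_sensor(light, gain):
    noise = random.gauss(0, 2)     # fixed read noise, added before the amp
    return (light + noise) * gain  # the gain amplifies signal and noise alike

low_iso = [read_sensor(100, gain=1) for _ in range(1000)]
high_iso = [read_sensor(100, gain=8) for _ in range(1000)]

def spread(samples):
    """Standard deviation -- a rough stand-in for visible noise."""
    mean = sum(samples) / len(samples)
    return (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5

print(round(spread(low_iso), 1))   # noise at base gain
print(round(spread(high_iso), 1))  # roughly 8x larger at the higher gain
```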

After being amplified, the data is passed to your camera’s onboard computer, where a number of important image processing steps occur.

Demosaicing

The raw image data is first passed through the camera’s demosaicing algorithms to determine the actual color of each pixel. Although the computing power in your digital camera would make a desktop computer user of a dozen years ago green with envy, demosaicing is so complicated that camera vendors sometimes have to take shortcuts with their demosaicing algorithms to keep the camera from getting bogged down with processing. After all, a computer can sit there and work for as long as it needs, but the camera has to be ready to shoot again in a fraction of a second. The ability to make the most of the camera’s processing resources is one reason that some cameras are better at demosaicing than others.

Colorimetric interpretation

Earlier it was noted that each photosite has a red, green, or blue filter over it, but “red,” “green,” and “blue” were never defined. Your camera is programmed with colorimetric information about the exact color of these filters and so can adjust the overall color information of the image to compensate for the fact that its red filters, for example, may actually have a bit of yellow in them.

Color space

After all this computation, your camera’s computer will have a set of color values for each pixel in your image. For example, a particular pixel may have measured as 100 percent red, 0 percent green, and 0 percent blue, meaning it should be a bright red pixel. But what does 100 percent red mean—100 percent of what?

To ensure that different devices and pieces of software understand what “100 percent red” means, your image is mapped to a color space. Color spaces are simply specifications that define exactly what color a particular color value—such as 100 percent red—corresponds to.

Most cameras provide a choice of a couple of color spaces, usually sRGB and Adobe RGB. Your choice of color space can affect the appearance of your final image because the same color values might map to different actual colors in different spaces. For example, 100 percent green in the Adobe RGB color space is defined as a noticeably more saturated color than 100 percent green in the sRGB space.

All of these color spaces are smaller than the full range of colors that your eyes can see, and some of them may be smaller than the full range of colors that your camera can capture. If you tell your camera to use sRGB, but your camera is capable of capturing, say, brighter blues than provided for in the sRGB color space, the blue tones in your image will be squeezed down to fit in the sRGB space. Your image may still look fine, but if the image were mapped to a larger color space, there’s a chance that more of the colors your camera captured would be visible.
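One simple way to picture that squeezing is clamping: any captured value beyond what the destination space can represent collapses to the space's maximum. (Real color space conversions are more nuanced and may compress rather than clip, and the "capture range" below is invented purely for illustration.)

```python
# A cartoon of gamut squeezing: on a made-up scale where 1.0 is the most
# intense blue the chosen color space can represent, any captured value
# beyond that limit is clamped to the maximum.

SPACE_MAX = 1.0  # hypothetical ceiling of the smaller color space

def map_to_space(captured_blue, space_max=SPACE_MAX):
    return min(captured_blue, space_max)

# Suppose the sensor could record blues up to 1.2 on this scale:
for captured in (0.6, 1.0, 1.2):
    print(map_to_space(captured))  # values beyond 1.0 collapse to 1.0
```

In a larger space with a higher ceiling, the 1.2 reading would survive as a distinct, brighter blue, which is the chapter's point about choosing a space big enough for what the camera captured.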

After demosaicing your image, the camera converts the resulting color values to the color space you’ve selected. (Most cameras default to sRGB.) Note that if you’re shooting raw, you can change this color space later using Capture NX.

White balance

Different types of light have different color casts, characterized as a color temperature measured on the Kelvin (K) scale. To properly interpret the color in your image, your digital camera needs to know what type of light is illuminating your scene.

White balancing is the process of calibrating your camera to match the current lighting situation. Because white contains all colors, calibrating your camera to properly represent white automatically calibrates it for any color in your scene.

Your camera will likely provide many different white balance settings, from an Auto White Balance mode that tries to guess the proper white balance, to specific modes that let you choose the type of light you’re shooting in, to preset modes that let you manually create custom white balance settings. These settings don’t have any impact on the way the camera shoots or captures data. Instead, they affect how the camera processes the data after it’s been demosaiced and mapped to a color space.
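The core of any white balance adjustment can be sketched as per-channel gains: scale red and blue so that something known to be neutral comes out with equal red, green, and blue. The warm tungsten-like readings below are hypothetical, chosen just to show the mechanism:

```python
# White balancing as channel gains: given the camera's reading of a known
# neutral patch, compute red and blue gains (normalized to green) that make
# that patch come out gray, then apply the same gains to every pixel.

def white_balance(pixel, neutral_reading):
    r_n, g_n, b_n = neutral_reading
    r_gain, b_gain = g_n / r_n, g_n / b_n  # normalize to the green channel
    r, g, b = pixel
    return (r * r_gain, g, b * b_gain)

# A white card photographed under warm light reads too red and not blue enough:
neutral = (200.0, 160.0, 100.0)
print(white_balance(neutral, neutral))              # the card balances to gray
print(white_balance((100.0, 80.0, 50.0), neutral))  # same cast, darker tone
```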

When you shoot in Auto White Balance mode, the camera employs special algorithms for identifying what the correct white balance setting should be. Although most auto white balance mechanisms these days are very sophisticated, even the best ones can confuse mixed lighting situations—for example, shooting in a tungsten-lit room with sunlight streaming through the windows, shooting into a building from outside, and so on.

If you’re using a specific white balance or preset mode, the camera adjusts the color balance of the image according to those settings.

Gamma, contrast, and color adjustments

Suppose you expose a digital camera sensor to a light and it registers a brightness value of 50. If you expose the same camera to twice as much light, it will register a brightness value of 100. Although this makes perfect sense, it is, unfortunately, not the way your eyes work. Doubling the amount of light that hits your eyes does not result in a doubling of perceived brightness, because your eyes do not have a linear response to light.

Our eyes are much more sensitive to changes in illumination at very low levels and very high levels than they are to changes in moderate brightness. For this reason, we don’t perceive changes in light in the same linear way that a digital camera does.

To compensate for this, a camera applies a mathematical curve to the image data to make the brightness levels in the image match what the eye would see. This gamma correction process redistributes the tones in an image so that there’s more contrast at the extreme ends of the tonal spectrum. A gamma-corrected image has more subtle change in its darkest and lightest tones than does an uncorrected, linear image (Figure 4.5).

Figure 4.5

Figure 4.5 The left image shows what your camera captures—a linear image. The right image shows how your eyes perceive the same scene. Because your eyes have a nonlinear response to light, they perceive more contrast in the shadows and highlights.
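Gamma correction is typically just a power curve applied to the normalized linear values. The sketch below uses an exponent of 1/2.2, a common display gamma, as an illustrative choice; cameras and color spaces vary in the exact curve they apply:

```python
# Gamma correction as a power curve: raw sensor values are linear in light,
# and raising them to 1/gamma redistributes the encoding's levels so that
# shadows get finer tonal gradations.

def gamma_correct(linear, gamma=2.2):
    """Map a normalized linear value (0.0-1.0) through a 1/gamma power curve."""
    return linear ** (1.0 / gamma)

for linear in (0.05, 0.25, 0.5, 1.0):
    print(round(gamma_correct(linear), 3))
# The darkest input is lifted the most, while 1.0 stays at 1.0 -- the curve
# reshapes the tonal distribution without changing the white point.
```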

Next, the camera adjusts the image’s contrast and color. These alterations are usually fairly straightforward—an increase in contrast, perhaps a boost to the saturation of the image. Most cameras let you adjust these parameters using the built-in menuing system, and most cameras provide some variation on the options shown in Figure 4.6.

Figure 4.6

Figure 4.6 On most cameras, internal image processing can be controlled through a simple menu.

Noise reduction and sharpening

Many cameras employ some form of noise reduction. Some cameras also automatically switch on an additional, more aggressive noise reduction process when you shoot using a lengthy exposure.

All cameras also perform some sharpening of their images. How much varies from camera to camera, and the sharpening amount can usually be adjusted by the user. This sharpening is employed partly to compensate for the blurring that is applied to even out color variations during the demosaicing stage (Figure 4.7).

Figure 4.7

Figure 4.7 This figure shows the same image sharpened two different ways. The image on the left has been reasonably sharpened, but the one on the right has been aggressively oversharpened, resulting in a picture with harsher contrast.

Eight-bit conversion

Your camera’s image sensor attempts to determine a numeric value that represents the amount of light that strikes a particular pixel. Most cameras capture 10 to 12 bits of data for each pixel on the sensor. In the binary counting scheme that all computers use, 12 bits allow you to count from 0 to 4095, which means that you can represent 4096 different shades that the sensor can capture, from the darkest to the lightest tones.

The JPEG format allows only 8-bit images. With 8 bits, you can count from 0 to 255. Obviously, converting to 8 bits from 12 means throwing out some data, which manifests in your final image as a loss of fine color transitions and details. Although this loss may not be perceptible (particularly if you’re outputting to a monitor or printer that’s not good enough to display those extra colors anyway), it can have a big impact on how far you can push your color corrections and adjustments.
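A short sketch shows exactly what gets thrown out. Dropping from 12 bits to 8 (here by discarding the four least significant bits, one common approach) collapses 16 distinct captured tones into every single 8-bit value:

```python
# 12-bit to 8-bit conversion: 4096 captured levels collapse into 256, so
# sixteen different sensor values become indistinguishable in the output.

def to_8bit(value_12bit):
    return value_12bit >> 4  # drop the 4 least significant bits

# Sixteen consecutive midtone values all land on the same 8-bit value:
fine_gradation = list(range(2048, 2064))
print(sorted({to_8bit(v) for v in fine_gradation}))  # [128]
```

This is the loss of fine tonal transitions the text describes: invisible on many displays, but gone for good once you start pushing corrections.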

JPEG compression

With your image data interpreted, corrected, and converted to 8-bit, it’s ready to be compressed for storage on your camera’s media card. JPEG compression exploits the fact that your eyes are more sensitive to changes in luminance than they are to changes in color.

With JPEG compression, your image will take up far less space, allowing you to fit more images onto your camera’s storage card. However, JPEG is a lossy compressor, which means your image loses some quality; how much depends on the level of JPEG compression that you choose to apply.

On most cameras, the best level of JPEG compression is indistinguishable from an uncompressed image. However, you must be very careful not to recompress a JPEG image. JPEG compressing an image that has already been through a JPEG compression process can increase the image quality loss to a perceptible level (Figure 4.8). You must keep this in mind throughout your JPEG workflow.

Figure 4.8

Figure 4.8 In this rather extreme example of JPEG compression, you can see the posterization effects that can occur from aggressive compression (or repeated recompression) as well as the blocky artifacts that JPEG compression can produce.

File storage

Finally, the image is written to your camera’s memory card. These days, many cameras include sophisticated memory buffers that let them process images in an onboard RAM cache while simultaneously writing out other images to the storage card. Good buffer and writing performance is what allows some cameras to achieve high frame rates when shooting in burst mode.

Your camera also stores important exchangeable image file (EXIF) information in the header of the JPEG file. The EXIF header contains, among other things, all of the relevant exposure information for your shot. Everything from camera make and model to shutter speed, aperture setting, ISO speed, white balance setting, exposure compensation setting, and much more is included in the file, and you can reference all of this information from within Capture NX (Figure 4.9).

Figure 4.9

Figure 4.9 Capture NX allows you to look at the EXIF data of any image that you’re browsing or editing.

How Your Camera Makes a Raw Image

It’s much easier for your camera to make a raw file than a JPEG file for the simple reason that when shooting raw, your camera passes the hard work on to you, or rather, to you and your computer. When shooting in raw mode, the camera does nothing more than collect the data from the image sensor, amplify it, and then write it to the storage card along with the usual EXIF information. In addition, certain camera settings such as white balance and any image processing settings are stored. It’s your job to take that raw file and run it through software to turn it into a usable image.

A raw converter, such as the one included in Capture NX, is just a stand-alone variation of the same type of software that’s built into your camera. With it, you move all of the computation that’s normally done in your camera onto your desktop, where you have more control and a few more options.

Why should you do it yourself when you can get your camera to do it for you? Let’s take a look at each image processing step to see why raw is a better way to go for many images.

Demosaicing

As mentioned earlier, the quality of a demosaicing algorithm often has a lot to do with the final image quality that you get from your camera. Demosaicing is a complex process, and a poorly demosaiced image can yield artifacts such as magenta or green fringes along high-contrast edges, poor edge detail, weird moiré patterns, or blue halos around specular highlights. Today, most cameras employ special circuits or digital signal processors designed specifically to handle the hundreds of calculations per pixel that are required to demosaic an image. Even though your camera may have a lot of processing power, it doesn’t always have a lot of processing time. If you’re shooting a burst of images at five frames per second, the camera can’t devote much time to its image processing. Therefore, because time is often short and the custom chips used for demosaicing are often battery hogs, in-camera demosaicing algorithms will sometimes take shortcuts for the sake of speed and battery power.

Your desktop computer can afford to take more time to crunch numbers. Consequently, most raw conversion applications use a more sophisticated demosaicing algorithm than what’s in your camera. This often results in better color and quality than your camera can provide when shooting in JPEG mode.

Colorimetric interpretation

Just as your camera does when shooting in JPEG mode, stand-alone raw converters must adjust the image to compensate for the specific color of the filters on your camera’s image sensor. This colorimetric information is usually contained in special camera profiles that are included with the raw software. If your software doesn’t provide a profile for your camera, it can’t process that camera’s raw files.

Color space

As explained earlier, your image data must be mapped to a color space when your camera shoots a JPEG file, and raw processing software on your computer must do the same thing. Your raw converter lets you specify which color space you want to use, and probably offers more choices than you’ll find in your camera. In addition to the sRGB and Adobe RGB color spaces, which your camera probably offers, most raw converters provide choices such as ProPhoto RGB.

White balance

Good white balance is a tricky process, both objectively and subjectively. Objectively, calculating accurate white balance in a difficult lighting situation—say, sunlight streaming into a room lit with fluorescent light—can trip up even a quality camera. When working with raw files, you can adjust the white balance as you see fit, which can often mean the difference between a usable and an unusable image.

Subjectively, accurate white balance is not always the best white balance. Although you may have set the white balance accurately and achieved extremely precise color, an image that’s a little warmer or cooler may actually be more pleasing. Altering the white balance of the image in your raw converter is often the best way to make such a change.

Even though your camera doesn’t perform any white balance adjustments when shooting in raw mode, it does store the white balance setting in the image’s EXIF information. Your raw converter, in turn, reads this setting and uses it as the initial white balance setting. You can start adjusting the white balance from there.

Gamma, contrast, and color adjustments

As discussed earlier, when shooting JPEG, your camera performs a gamma adjustment to the image so that the brightness values are closer to what our eyes see when they look at a scene. In JPEG mode, you can’t control the amount of gamma correction that your camera applies, but when processing a raw image you have more control, as you can easily adjust and tweak the gamma correction.

Similarly, although your camera may provide contrast, saturation, and brightness controls for JPEG shooting, these controls probably offer only three to five different settings. Raw converters provide controls with tremendous range and far more settings.

What’s more, when you’re working with raw files, Capture NX provides highlight recovery tools that allow you to perform corrections that simply aren’t possible when you work with non-raw formats.

Noise reduction and sharpening

Just as your camera performs noise reduction and sharpening, your raw converter also includes these capabilities. However, like the gamma, contrast, and color adjustment controls, the noise reduction and sharpening controls of a raw converter provide much finer degrees of precision.

Eight-bit conversion

Although your camera must convert its 12-bit color data to 8 bits before storing it in JPEG format (because the JPEG format doesn’t support higher bit depths), raw files aren’t as limited. With your raw converter, you can choose to compress the image down to 8 bits or to write it as a 16-bit file. Sixteen bits provides more number space than you need for a 12-bit file, but the extra space means that you don’t have to toss out any of your color information. The ability to output 16-bit files is one of the main quality advantages of shooting in raw.
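The "more number space than you need" point can be verified directly: every 12-bit value fits in 16 bits (typically by shifting it into the high bits), and the original value survives the round trip exactly:

```python
# Why a 16-bit file loses nothing from a 12-bit capture: shifting a 12-bit
# value into a 16-bit container is exactly reversible, so no color
# information has to be thrown away.

def to_16bit(value_12bit):
    return value_12bit << 4  # scale 0-4095 into the 16-bit range

def back_to_12bit(value_16bit):
    return value_16bit >> 4

original = list(range(4096))                            # every 12-bit value
round_trip = [back_to_12bit(to_16bit(v)) for v in original]
print(round_trip == original)  # True -- nothing was discarded
```

Contrast this with the 8-bit conversion, where sixteen captured values collapse into each output value and can never be separated again.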

File storage

With a raw file, you can choose to save your final edited image in any of several formats. Obviously, you’re free to save your image as an 8-bit JPEG file, just as if you had originally shot in JPEG mode, but most likely you’ll want to save in a non-compressed format to preserve as much image quality as possible. And if you decide to work in 16-bit mode, you’ll want to choose a format that can handle 16-bit images, such as TIFF.

One of the great advantages of shooting in raw mode is that you can process your original image in many different ways and save it in various formats. Thus, there might be times when you do want to save your raw files in JPEG format—as part of a Web production workflow, for instance. This doesn’t mean that you’ve actually compromised any of your precious raw data, since you can always go back to your original raw file, process it again, and write it to a different format.

What’s more, as raw conversion software improves, you can return to your original raw files, reprocess them, and possibly produce better results. Raw files truly represent the digital equivalent of a negative.

Raw Concerns

When compared side by side, it’s easy to see that shooting raw offers a number of important advantages over shooting JPEG: editable white balance, 16-bit color support, and no compression artifacts. But as you’ve probably already suspected, there is a price to pay for this additional power.

Storage

Raw files are big. Whereas a fine-quality JPEG file from a 10-megapixel camera might weigh in at 3 MB, the same image shot in raw will devour 7–9 MB of storage space. Fortunately, this is not as grievous an issue as it used to be thanks to falling memory prices.

Workflow

One of the great advantages of digital photography is that it makes shooting lots of pictures very inexpensive. Thus, you can much more freely experiment, bracket your shots, and shoot photos of subjects that you normally might not want to risk a frame of film on. Consequently, it’s very easy to quickly start drowning in images, all of which have unhelpful names like DSC0549.JPG.

Managing image glut is pretty easy with JPEG files since there are so many programs that can read them. In fact, both the Mac OS and Windows include JPEG readers.

With raw files, your workflow is a bit more complicated because a raw file doesn’t contain any comprehensible image data until it has been processed: It’s a little more difficult to quickly glance through a collection of raw images. Fortunately, both the Mac OS and Windows provide simple image viewers that let you look at raw files. These programs work by performing a simple automatic raw conversion, just like your camera does when shooting in JPEG mode.

However, because there’s no set standard for the formatting of raw files, different vendors encode their raw data in various ways. Also, as you learned earlier, to perform a raw conversion, your raw conversion software has to determine certain things about the type of camera that was used to shoot the image, such as the colorimetric data about the colored filters on the camera’s image sensor.

For these reasons, your raw conversion software must be outfitted with a special profile that describes the type of camera that produced the files that you want to convert. Unfortunately, these profiles have to be created by the software manufacturer. If your raw converter of choice doesn’t support your specific camera model, you’ll be unable to view and process raw files in that program.

Capture NX can process raw files from any Nikon camera that outputs NEF (Nikon Electronic Format) files. NEF files can contain raw camera data as well as non-raw data, just like a TIFF or JPEG file.

If you’re shooting with a different type of camera, you won’t be able to process your raw files in Capture NX. You’ll need to process them using a different converter, save them as TIFF or JPEG files, and then bring the results into Capture NX when you want to edit them. We’ll cover this process in more detail in the next chapter.

Shooting performance

Because raw files are so big, the camera requires more time and memory to buffer and store them. Depending on the quality of your camera, choosing to shoot in raw mode may compromise your ability to capture bursts of images at a fast frame rate. In addition, your camera may take longer to recover from a burst of shooting, because its buffer will fill up quickly and take more time to flush.

If you want speedy burst shooting and the ability to shoot raw, you might have to consider upgrading to a camera with better burst performance.

Raw Advantages

Shooting raw will not get you images with a wider range of colors or tones, nor will it get you more sharpness and detail; what it gets you is many more gradations of tone and color within the range captured. The advantage of raw has to do with when you start to edit. If you never perform any image editing, then there's little reason to shoot raw. Learn to use the adjustment controls in your camera, and keep shooting JPEG.

But, if you like to perform the types of edits that we saw in the last chapter, then there are huge advantages to shooting in raw. In simplest terms, raw files score over JPEG files when editing because:

  • With raw files, you can adjust white balance after you shoot, with far greater latitude and accuracy than you can with JPEG images. In addition, there’s no way to introduce artifacts or problems into your edit when using a raw converter to adjust white balance.
  • Raw files often allow you to recover highlights that have been overexposed, allowing you to restore detail to areas that initially appear completely white.
  • Because of their greater bit depth, raw files allow you to perform more edits before you begin to see visible degradation in your image.
  • Since they aren’t JPEG compressed, raw files never suffer from the visible artifacts introduced by JPEG compression.
  • Because you’re in control of the raw conversion that’s normally performed inside your camera, you can tweak the conversion settings to your particular tastes.
  • A raw file is a true digital negative. Years from now, if a much-improved raw conversion technology becomes available, you can re-process your raw files and possibly get better results.
  • Finally, raw allows more sophisticated workflows that make it simple to apply the same edits to multiple images, and to process the same raw file in many different ways to get several different results.
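The bit-depth point in the list above can be made concrete with a toy experiment (my own sketch, not from the book): darken a smooth 8-bit ramp by half, then brighten it back. In an 8-bit pipeline each step rounds to whole levels, destroying half the gradations; with higher precision the rounding happens only once and the ramp survives intact.

```python
# Toy illustration of why greater bit depth tolerates more edits:
# apply darken-then-brighten to a 256-level gradient two ways.

def edit_8bit(levels):
    # Each intermediate result is snapped back to 0-255 integers.
    darkened = [v // 2 for v in levels]          # 256 levels -> 128
    return [min(v * 2, 255) for v in darkened]   # brighten back

def edit_high_precision(levels):
    # Keep full precision through the edit; round only once at the end.
    return [min(round(v / 2 * 2), 255) for v in levels]

ramp = list(range(256))
print(len(set(edit_8bit(ramp))))            # 128 distinct levels left
print(len(set(edit_high_precision(ramp))))  # all 256 levels survive
```

The 8-bit pipeline throws away every other tonal level, which shows up in a real image as banding in smooth areas such as skies. A raw file's extra bits give the math that much more headroom before the damage becomes visible.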

In the rest of this chapter, we’ll look at all of the raw editing controls that you’ll find in Capture NX.
