What Is Chroma?

Once light from the illuminant within a scene has bounced off a subject and has been captured by the optical/digital components of a camera, the reflected color information is stored via the chroma component of video. Chroma is the portion of an analog or digital video signal that carries color information, and in many video applications it can be adjusted independently of the luma of the image. In component Y’CBCR-encoded video, chroma is carried in the Cb and Cr color difference channels of the video signal.

This scheme was originally devised to ensure backward compatibility between color and monochrome television sets (back when there were such things as monochrome television sets). Monochrome TVs simply filtered out the chroma component and displayed the luma component by itself. However, this encoding scheme also proved valuable for video signal compression: because the chroma component can be subsampled in consumer video formats, quality can be lowered in a virtually imperceptible way while shrinking the bandwidth needed to store the signal, first in analog form and later in digital files, allowing more video to be recorded using less storage media.
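Since chroma subsampling amounts to averaging neighboring chroma samples, it can be sketched in a few lines of Python. In this illustrative sketch (the function name and the 4×4 plane of sample values are invented for demonstration), each 2×2 block of a chroma plane is averaged, as 4:2:0 subsampling does, while luma is left untouched:

```python
def subsample_420(plane):
    """Average each 2x2 block of a chroma plane, halving both its
    dimensions, as 4:2:0 subsampling does. Luma is left untouched."""
    h, w = len(plane), len(plane[0])
    return [
        [(plane[y][x] + plane[y][x + 1] +
          plane[y + 1][x] + plane[y + 1][x + 1]) / 4.0
         for x in range(0, w, 2)]
        for y in range(0, h, 2)
    ]

# A 4x4 chroma plane shrinks to 2x2, a quarter of the samples per channel.
cb = [[10, 12, 20, 22],
      [14, 16, 24, 26],
      [30, 32, 40, 42],
      [34, 36, 44, 46]]
small = subsample_420(cb)
print(small)  # [[13.0, 23.0], [33.0, 43.0]]

# Relative to 4:4:4 (three full planes), 4:2:0 stores 1 + 1/4 + 1/4 = 1.5
# samples per pixel instead of 3: half the data.
```

The savings come entirely from the two chroma planes, which is why the loss is nearly invisible.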

The color of any recorded subject with an encoded chroma component has two characteristics: hue and saturation.

What Is Hue?

Hue simply describes the wavelength of the color, whether it’s red (a long wavelength), green (a medium wavelength that’s shorter than red’s), or blue (the shortest visible wavelength of all). Each color we consider to be distinct from any other (orange, cyan, purple) is a different hue.

Hue is represented on any color wheel as an angle about the center (Figure 4.9).

Figure 4.9

Figure 4.9 How hue is represented by a color wheel.

When a color correction application provides a hue control, it’s typically a slider or parameter measured in degrees. Increasing or decreasing this value shifts the colors of the entire image in the direction of adjustment.
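What such a control does can be sketched with Python's standard colorsys module (HSV here stands in for whatever internal color model a given application actually uses):

```python
import colorsys

def rotate_hue(r, g, b, degrees):
    """Shift a color's hue by the given angle around the color wheel."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb((h + degrees / 360.0) % 1.0, s, v)

# Rotating pure red by +120 degrees lands on pure green;
# applied to every pixel, the whole image shifts the same way.
print([round(c, 6) for c in rotate_hue(1.0, 0.0, 0.0, 120)])  # [0.0, 1.0, 0.0]
```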

What Is Saturation?

Saturation describes the intensity of a color, whether it’s a vivid, deep blue or a pale, pastel blue. A completely desaturated image has no color at all; it’s a grayscale, monochrome image.

On the color wheels used in some applications’ onscreen color correction interfaces, saturation is represented as distance from the center: completely desaturated (0 percent) at the center of the wheel and completely saturated (100 percent) at the wheel’s edge (Figure 4.10).

Figure 4.10

Figure 4.10 This shows 100 percent and 0 percent saturation on a standard color wheel, corresponding to the saturated and desaturated regions of a vectorscope.

Increasing saturation intensifies the colors of an image. Decreasing saturation reduces the vividness of colors in an image, making it paler and paler until all color disappears, leaving only the monochrome luma component.
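This saturation behavior can be sketched with the standard colorsys module, again using its HSV conversion as a stand-in for an application's internal model (the sample blue is an arbitrary illustrative value):

```python
import colorsys

def scale_saturation(r, g, b, factor):
    """Scale a color's saturation: a factor of 0.0 fully desaturates,
    values above 1.0 intensify (clipped at full saturation)."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb(h, min(s * factor, 1.0), v)

vivid_blue = (0.2, 0.4, 1.0)
print([round(c, 6) for c in scale_saturation(*vivid_blue, 0.5)])  # a paler blue
print([round(c, 6) for c in scale_saturation(*vivid_blue, 0.0)])  # equal channels: gray
```

At a factor of 0.0 all three channels become equal, leaving only the monochrome value component, just as the text describes.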

Primary Colors

Video uses an additive color system, wherein red, green, and blue are the three primary colors that, added together in different proportions, are able to produce any other color that’s reproducible on a particular display (Figure 4.11).

Figure 4.11

Figure 4.11 Three primary colors combining. Any two result in a secondary color; all three produce pure white.

Red, green, and blue are the three purest colors that a display can represent, each produced by setting a single color channel to 100 percent and the other two color channels to 0 percent. Adding 100 percent of red, green, and blue results in white, while 0 percent of all three results in black.

Interestingly, this scheme matches our visual system’s sensitivities. As mentioned previously, our sensitivity to color comes from approximately 5 million cone cells found within our retinas, distributed into three types of cells:

  • Red-sensitive (long-wavelength, also called L cells)
  • Green-sensitive (medium-wavelength, or M cells)
  • Blue-sensitive (short-wavelength, or S cells)

The relative distribution of these three cell types is roughly 40:20:1 (L:M:S), with our lowest sensitivity corresponding to blue (the chief penalty of which is limited sharpness perception in predominantly blue scenes).

These are arranged in various combinations that, as we’ll see later, convey different color encodings to the image-processing part of our brains, depending on what proportions of each type of cone receive stimulus.

You may have noticed that some stage lighting fixtures (and increasingly, LED-based lighting panels for the film and video industry) consist of clusters of red, green, and blue lights. When all three lights are turned on, our naked eyes see a bright, clear white. Similarly, the red, green, and blue components within each physical pixel of a video or computer display combine as white to our eyes when all three channels are at 100 percent.

RGB Channel Levels for Monochrome Images

Another important ramification of the additive color model is that identical levels in all three color channels, no matter what the actual amounts are, result in a neutral gray image. For example, the monochrome image in Figure 4.12 is shown side by side with an RGB parade scope displaying each color channel. Because there’s no color, every channel is exactly equal to the others.

Figure 4.12

Figure 4.12 The three color channels of a completely monochrome image are equal.

Because of this, spotting improper color using an RGB or YRGB parade scope is easy, assuming you’re able to spot a feature that’s supposed to be completely desaturated or gray. If the gray feature does not have three perfectly equal waveforms in the RGB parade scope, then there’s a tint to the image.
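That parade-scope reasoning is easy to express numerically. In this sketch, the function name, tolerance, and pixel values are all invented for illustration; the idea is simply to compare each channel of a supposedly neutral feature against the channel average:

```python
def cast_from_gray(r, g, b, tolerance=0.02):
    """Given a pixel sampled from a feature that should be neutral gray,
    report which channel dominates; equal channels mean no tint."""
    mean = (r + g + b) / 3.0
    offsets = {"red": r - mean, "green": g - mean, "blue": b - mean}
    strongest = max(offsets, key=lambda k: abs(offsets[k]))
    if abs(offsets[strongest]) <= tolerance:
        return "neutral"
    return f"{strongest} cast ({offsets[strongest]:+.3f})"

# A white feature lit by daylight, reading slightly strong in blue:
print(cast_from_gray(0.82, 0.83, 0.88))  # reports a blue cast
print(cast_from_gray(0.50, 0.50, 0.50))  # reports neutral
```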

For example, the white pillar in the image corresponds to the leftmost high spikes in the red, green, and blue waveforms of the parade scope (Figure 4.13). Since they’re nearly equal (actually, there’s a slight blue cast, which makes sense given that the scene is outside in daylight), we can conclude that the highlights of the image are fairly neutral.

Figure 4.13

Figure 4.13 Even though the red waveform is generally strongest and the blue waveform is weakest, the close alignment of the tops and bottom of the waveforms lets us know that the highlights and shadows are fairly neutral.

Secondary Colors

Secondary colors are the combination of any two color channels at 100 percent, with the third at 0 percent:

  • Red + green = yellow
  • Green + blue = cyan
  • Blue + red = magenta
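The arithmetic behind this list is simple enough to verify directly; a minimal Python sketch, with channel values as 0–1 fractions of full intensity:

```python
# Primaries: one channel at 100 percent, the other two at 0 percent.
primaries = {"red": (1, 0, 0), "green": (0, 1, 0), "blue": (0, 0, 1)}

def add_rgb(a, b):
    """Additive mix: combine two lights channel by channel, clipping at 100 percent."""
    return tuple(min(x + y, 1) for x, y in zip(a, b))

print(add_rgb(primaries["red"], primaries["green"]))   # (1, 1, 0) -> yellow
print(add_rgb(primaries["green"], primaries["blue"]))  # (0, 1, 1) -> cyan
print(add_rgb(primaries["blue"], primaries["red"]))    # (1, 0, 1) -> magenta
```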

Because the primary and secondary colors are the easiest colors to create mathematically using the RGB additive color model, they make up the bars of the standard color bars test pattern used to calibrate video equipment (Figure 4.14).

Figure 4.14

Figure 4.14 Full-frame color bars, the test pattern common for PAL video. Each bar of this test pattern corresponds to a color target on a standard vectorscope graticule.

As discussed later in “Using the Vectorscope,” each bar corresponds to a color target on the vectorscope graticule. These color targets provide a much-needed frame of reference, showing which traces of a vectorscope graph correspond to which colors.

Complementary Colors

There’s one more aspect of the additive color model that’s crucial to understanding how nearly every color adjustment we make works: the way that complementary colors neutralize one another.

Simply put, complementary colors are any two colors that sit directly opposite one another on the color wheel, as shown in Figure 4.15.

Figure 4.15

Figure 4.15 Two complementary colors sit directly opposite one another on the color wheel.

Whenever two perfectly complementary colors are combined, the result is complete desaturation. As two hues deviate to either side of being exactly complementary, this cancelling effect also falls off, until the hues are far enough apart that the colors simply combine additively into some other color (Figure 4.16).
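This cancellation can be sketched numerically with the standard colorsys module, treating a color's complement as a 180-degree hue rotation and the combination as a simple per-channel average (an idealization of mixing two equal lights):

```python
import colorsys

def complement(r, g, b):
    """The hue directly opposite on the color wheel, at the same
    saturation and value."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb((h + 0.5) % 1.0, s, v)

def mix(a, b):
    """Combine two colors in equal amounts by averaging each channel."""
    return tuple((x + y) / 2.0 for x, y in zip(a, b))

orange = (1.0, 0.5, 0.0)
result = mix(orange, complement(*orange))
print([round(c, 6) for c in result])  # equal channels: completely desaturated
```

Mixing a color with a hue that is only roughly opposite leaves the channels unequal, which is the residual tint the falloff in Figure 4.16 depicts.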

Figure 4.16

Figure 4.16 Where the hues are perfectly complementary to one another, the colors are completely cancelled out. As the angle of hue falls off from being complementary, so does this desaturating effect.

To understand why this works, it’s useful to delve deeper into the mechanics of human vision. As discussed in Margaret Livingstone’s Vision and Art: The Biology of Seeing (Harry N. Abrams, 2008), the dominant theory for how bipolar and M retinal ganglion nerve cells encode color information for processing in the thalamus of the brain is the color-opponent model.

The cones described earlier connect in groups to bipolar cells that compare the cone inputs to one another. For example, in one type of bipolar cell, (L)ong-wavelength (red-sensitive) cone inputs inhibit the nerve, while (M)edium-wavelength (green-sensitive) and (S)hort-wavelength (blue-sensitive) cone inputs excite it (Figure 4.17). In other words, for that cell, each red input is a positive influence, and each green or blue input is a negative influence.

Figure 4.17

Figure 4.17 This is an approximation of opponent model cell organization. Groups of cone cells are organized so that multiple cell inputs influence the retinal ganglion cells, which encode cell stimulus for further processing by the brain. Some cells excite (+) the ganglion, while other cells inhibit (–) the ganglion. Thus, all color signals are based on a comparison of colors within the scene.

In Maureen C. Stone’s A Field Guide to Digital Color (A K Peters, 2003), the first level of encoding for this color-opponent model is described as conveying three signals corresponding to three different cone combinations:

  • Luminance = L-cones + M-cones + S-cones
  • Red – Green = L-cones – M-cones + S-cones
  • Yellow – Blue = L-cones + M-cones – S-cones
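Stone's three signals can be written directly as arithmetic on the cone responses. A sketch (the function name is invented, and the cone response values are arbitrary, chosen to represent a reddish stimulus):

```python
def opponent_signals(l_cone, m_cone, s_cone):
    """First-stage opponent encoding: three signals built from sums and
    differences of the long-, medium-, and short-wavelength cone responses."""
    return {
        "luminance":   l_cone + m_cone + s_cone,
        "red-green":   l_cone - m_cone + s_cone,
        "yellow-blue": l_cone + m_cone - s_cone,
    }

# A strongly long-wavelength (reddish) stimulus excites L far more than M or S:
signals = opponent_signals(0.9, 0.2, 0.1)
print({k: round(v, 3) for k, v in signals.items()})
```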

Color-opponent cells, in turn, connect to double-opponent cells, which further refine the comparative color encoding that’s used to pass information on to the thalamus and, from there, to the vision-processing regions of our brains.

Two important byproducts of double-opponency are the cancellation of complementary colors discussed previously and the effect of simultaneous color contrast, where gray patches are seen to assume the complementary hue of a dominant surround color (Figure 4.18).

Figure 4.18

Figure 4.18 The gray patches at the center of each colored square appear as if they’re tinted with the complementary color of each surrounding area of color. The patch inside the green square appears reddish, and the patch inside the red square appears greenish. This effect becomes more pronounced the longer you look at one or the other of the squares.

Perhaps the simplest way of summing up the opponent model of vision is that cone cells don’t output specific wavelength information—they simply indicate whether long-, medium-, or short-wavelength light is present, according to each cell’s sensitivities. It’s the comparison of multiple combinations of triggered and untriggered cone cells that our visual system and brain interpret as various colors in a scene.

In short, we evaluate the color of a subject relative to the other colors surrounding it. The benefit of this method of seeing is that it makes us capable of distinguishing the unique color of an object regardless of the color temperature of the dominant light source. An orange still looks orange whether we’re holding it outside in daylight or inside by the light of a 40-watt bulb, even though both light sources output dramatically different wavelengths of light that interact with the pigments of the orange’s skin.

We’ll see later how to use complementary color to adjust images and neutralize unwanted color casts in a scene.

Color Models and Color Spaces

A color model is a specific mathematical method of defining colors using a specific set of variables. A color space is effectively a predefined range of colors (or gamut) that exists within a particular color model. For example, RGB is a color model. sRGB is a color space that defines a gamut within the RGB color model.

The print standard of CMYK is a color model, as is the CIE XYZ method of representing color in three dimensions that’s often used to represent the overall gamut of colors that can be reproduced on a particular display.

There are even more esoteric color models, such as the IPT color model, a perceptually weighted color model designed to represent a more uniform distribution of values that accounts for our eyes’ diminished sensitivity to various hues.

Color Models in 3D

Another interesting thing about color models is that you can use them to visualize a range of color via a three-dimensional shape. Each color model, when extruded into three dimensions, assumes a different shape. For example, a good pair of color model extrusions to compare is RGB and HSL:

  • The RGB color model appears as a cube, with black and white at two opposite diagonal corners of the cube (the center of the diagonal being the desaturated range of neutral black to white). The three primary colors—red, green, and blue—lie at the three corners that are connected to black, while the three secondary colors—yellow, cyan, and magenta—lie at the three corners connected to white (Figure 4.19, left).

    Figure 4.19

    Figure 4.19 Three-dimensional RGB and HSL color space models compared.

  • The HSL color model appears as a double cone (two cones joined at their bases), with black and white at the opposite top and bottom points. The 100 percent saturated primary and secondary colors are distributed around the outside of the middle, fattest part of this shape. The center line connecting the black and white points is the desaturated range of grays (Figure 4.19, right).
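The double-cone geometry can be confirmed with Python's standard colorsys module, whose HLS conversion is this same model (note that Python orders the tuple H, L, S):

```python
import colorsys

# Black and white sit at the two points of the double cone (L=0 and L=1),
# fully saturated colors ring its fattest middle slice (L=0.5, S=1),
# and neutral grays run up the center axis (S=0).
for name, rgb in [("black", (0.0, 0.0, 0.0)), ("white", (1.0, 1.0, 1.0)),
                  ("red", (1.0, 0.0, 0.0)), ("cyan", (0.0, 1.0, 1.0)),
                  ("mid gray", (0.5, 0.5, 0.5))]:
    h, l, s = colorsys.rgb_to_hls(*rgb)
    print(f"{name:9s} H={h:.2f} L={l:.2f} S={s:.2f}")
```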

These color models sometimes appear as the representation of a range of color in a video analysis tool, such as the 3D Histogram in Autodesk Smoke (Figure 4.20). Three-dimensional color space representations also appear in the onscreen interfaces of applications that use 3D keyers.

Figure 4.20

Figure 4.20 A three-dimensional Y’CBCR graph found in Autodesk Smoke. The value of each pixel in the image is extruded into 3D space according to the Y’CBCR color model.

Outside of their practical use in application interfaces, these 3D color space representations are also useful in giving us a framework for visualizing ranges of color and contrast in different ways.

RGB vs. Y’CBCR Color Models

In general, the digital media you’ll be color correcting will be delivered as either RGB- or Y’CBCR-encoded files. Consequently, color correction applications all work with both RGB and Y’CBCR color models. Components of each can be mathematically converted into those corresponding to the other, which is why even though you may be working with Y’CBCR source media shot using video equipment, you can examine the data using RGB parade scopes and make adjustments using RGB curves and RGB lift/gamma/gain parameters.

Similarly, RGB source media ingested via a film scanner or captured using a digital cinema camera can be examined using the Y’CBCR analysis of Waveform Monitors and vectorscopes and adjusted using the same luma and color balance controls that have been traditionally used for video color correction.

Converting one color model into the other is a mathematical exercise. For example, to convert R’G’B’ components into Y’CBCR components, you’d use the following general math:

  • Y’ (for BT.709 video) = (0.2126 × R’) + (0.7152 × G’) + (0.0722 × B’)
  • Cb = B’ – Y’
  • Cr = R’ – Y’
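In practice, the color-difference terms in the list above are also scaled so that Cb and Cr fit a −0.5 to +0.5 range. A sketch of the full BT.709 math in Python, for nonlinear (gamma-encoded) values between 0 and 1 (the function name is invented; the divisors are the standard BT.709 scale factors):

```python
def rgb_to_ycbcr_709(r, g, b):
    """Convert nonlinear R'G'B' (0-1 range) to BT.709 Y'CbCr.
    Cb and Cr are scaled color differences in the -0.5..0.5 range."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556   # 1.8556 = 2 * (1 - 0.0722)
    cr = (r - y) / 1.5748   # 1.5748 = 2 * (1 - 0.2126)
    return y, cb, cr

# Neutral gray carries no color difference; pure blue drives Cb to its maximum.
print([round(c, 4) for c in rgb_to_ycbcr_709(0.5, 0.5, 0.5)])
print([round(c, 4) for c in rgb_to_ycbcr_709(0.0, 0.0, 1.0)])
```

Note how any neutral value, with all three channels equal, lands at Cb = Cr = 0, which is why a gray feature plots at the center of a vectorscope.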

The HSL (HSB) Color Model

HSL stands for Hue, Saturation, and Luminance. It’s also sometimes referred to as HSB (Hue, Saturation, and Brightness). HSL is a color model, a way of representing and describing color using discrete values.

Even though digital media is not actually encoded using HSL, it’s an important color model to understand because it appears within the onscreen interfaces of numerous color correction and compositing applications. HSL is convenient because the three parameters—hue, saturation, and luminance—are easily understood and manipulated without the need for mind-bending math.

For example, if you had the R, G, and B controls shown in Figure 4.21, how would you change a color from greenish to bluish?

Figure 4.21

Figure 4.21 RGB Gamma, Pedestal, and Gain controls in an Adobe After Effects filter.

If you instead examine a set of H, S, and L sliders, it’s probably much more obvious that the thing to do is manipulate the H(ue) dial. To provide a more concrete example, Figure 4.22 shows the HSL qualification controls used to isolate a range of color and contrast for targeted correction.
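As a sketch of why the HSL route is more intuitive, here is the "turn the Hue dial" operation expressed with Python's standard colorsys module; achieving the same greenish-to-bluish move with separate R, G, and B controls would require three coordinated changes (the sample color values here are invented):

```python
import colorsys

def shift_hue(r, g, b, degrees):
    """Rotate a color's hue, leaving saturation and lightness untouched:
    the single-parameter adjustment an H(ue) dial performs."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    return colorsys.hls_to_rgb((h + degrees / 360.0) % 1.0, l, s)

greenish = (0.3, 0.8, 0.4)
bluish = shift_hue(*greenish, 90)  # one dial turn toward blue
print([round(c, 3) for c in bluish])  # blue is now the dominant channel
```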

Figure 4.22

Figure 4.22 HSL controls found in the Adobe After Effects hue/saturation filter.

Once you understand the HSL color model, the purpose of each control in Figure 4.22 should at least suggest itself to you, even if you don’t immediately understand the details.
