Color Correction in Nuke
Calling a lesson “Color Correction” is a bit naive; the subject deserves a whole course of its own. But this book is about more than color, and limited space reduces color correction to a single chapter. So let me start by explaining what it means.
Color correction refers to any change to the perceived color of an image. Making an image lighter or more saturated, changing its contrast, making it bluer: all of this is color correction. You can make an image look different as a stylistic decision, but you can also color correct to combine two images so they feel like part of the same scene. This task comes up constantly in compositing, when the foreground and the background need colors that work well together. There are plenty of other reasons to change the color of an image, too. A mask or an alpha channel might need adjusting, for example, to lose softness and gain contrast.
Whatever reason you have for color correcting an image, the process will work according to the way Nuke handles color. Nuke is a very advanced system that uses cutting-edge technology and theory to work with color. Therefore, it is important to understand Nuke’s approach to color so you understand color correcting within Nuke.
Understanding Nuke’s Approach to Color
Nuke is a 32-bit float linear color compositing application. That’s a bit of a fancy description, with some potentially unfamiliar terms, so I’ll explain it bit by bit.
- 32-bit: That’s the number of bits used to hold colors. Most compositing and image-manipulation programs are 8-bit, allowing for 256 variations of color per channel (resulting in what’s referred to as “millions of colors” when the three color channels are combined). 8-bit is normally fine for displaying color, but it is not precise enough for the calculations behind many operations and may produce unwanted results such as banding: an inaccurate display of gradients where changes in color happen abruptly instead of smoothly. 32-bit allows for a whopping 4,294,967,296 variations of color per channel, a staggering number that results in far more accurate images and calculations. 8- and 16-bit images brought into Nuke are bumped up to 32-bit; that doesn’t add any detail, but it does enable better calculations from that point onward.
- Float: Normally the color of an image is represented between black and white. In 8-bit images, for example, the 256 color variations run from 0 to 255: the value 0 is black, the value 255 is white, and the value 128 is roughly middle gray. But what about colors that are brighter than white? Surely the glow at the center of a lit lightbulb is brighter than a white piece of paper? Colors brighter than white are called super-whites, and colors darker than black are called sub-blacks (though short of black holes, I can’t think of a real-world analogy for those). Using 8 bits to describe an image simply doesn’t leave room for colors beyond black and white; these colors get clipped and are represented as plain black or white. In 32-bit float color, there is plenty of room, and these colors become representable. As mentioned before, 8-bit color is normally enough to display images on screen, and a computer monitor can still display only white and nothing brighter. Even so, it is very important to have access to those colors beyond white, especially when you’re color correcting. When you darken an image that contains both a piece of white paper and a lit lightbulb, you can leave the lightbulb white while darkening the paper to gray, resulting in an image that mimics real-world behavior and looks believable. Doing the same with a non-float image turns the white paper and the lightbulb the same gray, which is unconvincing.
- Linear: Linear can mean lots of things, but here I mean linear color space. A computer monitor doesn’t show an image exactly as it appears in reality, because the monitor is not a linear display device: it applies a mathematical curve, called gamma, when displaying images. Different monitors can have different curves, but most use the curve defined by the sRGB standard. Because the monitor distorts what it shows, images need to be “corrected” for this. This usually happens automatically, because most image capture devices apply the opposite curve when they encode an image: your scanner, camera, and image-processing applications all know about the monitor’s gamma, so they apply the reverse curve, which negates the monitor’s effect. This process is basic color management. The catch is that an image carrying such a gamma curve no longer stores true middle gray as middle gray, so it reacts differently to color correction and can produce odd results. Most applications still work directly on these gamma-encoded values, and most people dealing with color have become accustomed to this, largely because computer graphics is a relatively young industry that relied on computers that, until recently, were very slow. The correct way to manipulate imagery, in whatever way, is before the gamma curve has been applied: take a linear image, color correct it, composite it, transform it, and only then apply the reverse gamma curve so it displays correctly (the monitor applies its own gamma, negating the correction you just applied). Luckily, this is how Nuke works by default.
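The banding problem mentioned earlier is easy to see with a little arithmetic. The sketch below is plain illustrative Python, not Nuke code; the dark gradient and the 4x gain are made-up values for the example. It applies the same gain once in float and once after quantizing to 8 bits, then counts how many distinct levels survive:

```python
# A sketch of banding, assuming a simple round-to-256-levels quantizer.

def quantize_8bit(value):
    """Round a 0.0-1.0 float to the nearest of 256 levels."""
    return round(value * 255) / 255

# A dark gradient occupying only the bottom sixteenth of the range.
gradient = [i / 4096 for i in range(257)]

# Correct in float first, then quantize for display: smooth.
float_first = {quantize_8bit(min(v * 4, 1.0)) for v in gradient}

# Quantize first (as an 8-bit pipeline stores the image), then gain: banded.
eight_bit_first = {min(quantize_8bit(v) * 4, 1.0) for v in gradient}

print(len(float_first), "distinct levels when corrected in float")     # 65
print(len(eight_bit_first), "distinct levels when corrected in 8-bit") # 17
```

The 8-bit pipeline ends up with far fewer distinct gray levels across the same gradient, and those coarse steps are exactly what you perceive as banding.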
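The lightbulb-and-paper example can be sketched in a few lines as well. This is illustrative Python, not Nuke code; the pixel values and the 0.25x gain are assumptions for the example:

```python
# A sketch of darkening with and without float super-whites.

paper = 1.0      # white paper: exactly white
lightbulb = 4.0  # lit bulb: a super-white, four times brighter than white

def darken(value, gain=0.25):
    return value * gain

# Float pipeline: the bulb stays at or above white, the paper turns gray.
print(darken(paper), darken(lightbulb))  # 0.25 1.0

# Clipped pipeline: the bulb was stored as 1.0, so both turn the same gray.
clipped = min(lightbulb, 1.0)
print(darken(paper), darken(clipped))    # 0.25 0.25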
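The round trip between sRGB encoding and linear light can also be sketched directly. This is a minimal illustration in plain Python, not Nuke's internal code; it uses the standard piecewise sRGB transfer function, and the doubling of brightness is an arbitrary example correction:

```python
# A sketch of basic color management: decode sRGB to linear light,
# do the math there, then re-encode for the monitor.

def srgb_to_linear(v):
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

# A perceptual "middle gray" on screen (0.5 in sRGB) is only about 0.214
# in linear light, which is why math on encoded values misbehaves.
encoded = 0.5
linear = srgb_to_linear(encoded)

# Double the actual light linearly, then re-encode for display.
brightened = linear_to_srgb(min(linear * 2, 1.0))
print(round(linear, 3), round(brightened, 3))  # 0.214 0.686
```

Doubling the encoded 0.5 directly would give 1.0, pure white; doubling in linear light gives the much more plausible 0.686, which is the kind of difference linear working space makes to every color correction.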
Still confused? Here’s a recap: Nuke creates very accurate representations of color and can store colors that are brighter than white and darker than black. It also calculates all the compositing operations in linear color space, resulting in more realistic and more mathematically correct results.
Nuke has many color correction nodes, but they are all built out of basic mathematical building blocks, which are the same in every software application. The next section looks at those building blocks.