Processing: What is Banding? Part 1

“If we had a hard time, my mother would sit me down and we would talk about it, and she kept talking and kept processing until we started to laugh about it.”
– Liza Minnelli

This is my first blog post, opening a new series on processing photographs and the challenges you may face along the way. What I hope to contribute to the photography world is a combination of compressed but essential scientific background on why these problems emerge, together with hands-on practices for dealing with them. This week I’d like to focus on banding that emerges while processing an image file: what it is, where it comes from, and which alternative procedures you can apply to get rid of it or mitigate its effect. This article is, however, not about the banding that has recently been associated with new sensors and electronic-shutter challenges.

This article is part 1 of a series of 2. This week I focus on understanding banding in photography. Next week, I’ll look into how to mitigate the banding problem or get rid of it.

What is Banding?

Banding belongs to a group of failures during the editing process of an image that often pass unrecognized. Like halos, strong micro-contrast, or dust spots, it is often not identified until the photograph is already published or printed, especially by untrained eyes. The difficulty of discovering it in your image stems from the fact that banding often emerges only after compressing a file to a standard 8-bit JPEG. Furthermore, banding might not be recognizable until you look into specific color channels.

The banding effect appears when the transitions between tones are not smooth, and it is amplified by almost every additional adjustment. In areas with a lot of detail, we will not be able to recognize banding; it only emerges in seamless areas with similar shades of the same color. This often creates patterns of vertical or horizontal lines, which are most visible on backgrounds that are uniform or monochrome in color.

There is a higher probability that bands emerge in dark colors. Why is this the case?

To understand this principle from quantum physics, we have to accept that light in itself doesn’t contain any color. Digital photography is all about transferring light energy into a digital piece of information, which we then process. Light waves carry energy, and that energy is determined by their frequency. The small packaged carriers of this energy are called photons; the number of photons is proportional to the amount of energy. This energy is capable of interacting with objects, and as a result color can be perceived.

Image 1 shows that with increasing frequency the wavelength decreases while the photon energy increases.

Image 1: Light Wavelength and Energy Frequency. Image Source: ScratchAPixel (see Reference 4).
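The relationship in Image 1 can be checked with a quick back-of-the-envelope calculation using Planck’s relation E = h·c/λ. The snippet below is a minimal sketch (the wavelengths chosen for red, green and blue are typical values, not exact definitions):

```python
# Photon energy E = h*c / wavelength (Planck's relation).
# Shorter wavelength (= higher frequency) -> more energy per photon.
PLANCK_H = 6.626e-34   # Planck constant, J*s
SPEED_C = 2.998e8      # speed of light, m/s


def photon_energy_ev(wavelength_nm):
    """Energy of a single photon in electron volts."""
    joules = PLANCK_H * SPEED_C / (wavelength_nm * 1e-9)
    return joules / 1.602e-19  # convert joules -> eV


for color, nm in [("red", 700), ("green", 530), ("blue", 450)]:
    print(f"{color:5s} {nm} nm -> {photon_energy_ev(nm):.2f} eV")
```

Running this shows that a blue photon (~2.8 eV) carries noticeably more energy than a red one (~1.8 eV), exactly the trend Image 1 illustrates.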

How do our eyes work to translate energy frequencies into color?

Out of all existing energy frequencies, only a very small spectrum is visible to our human eyes. The light photons interact with the rods and cones on our retina (rods detect the presence of light; cones detect different wavelengths of light, allowing our brains to interpret color) and release energy as free electrons. The consequence of this process is an electrical impulse sent to our brain, which creates our vision. Our brain then interprets specific frequencies as specific colors, as shown in Image 1. This interpretation is far more complicated than described here, as detection and interpretation by the human eye also depend on

  • genetics (some rare people have four types of color receptors, allowing them to see far more colors; others have one- or two-color vision, the color blind, who see far fewer);
  • the color trigger (the human eye is able to detect more shades of blue and green than of red and yellow, mainly because the key light-sensing chemical in our eyes responds most strongly to blue and green wavelengths);
  • the color characteristics: brightness, contrast of saturation, contrast of hue, or color similarity.

How does a camera sensor translate energy frequencies into color?

This happens in a very similar way to our eyes. When light photons interact with an image sensor (instead of with our human eyes), free electrons are also released. Depending on the photon energy, a different number of electrons is released.

How does more light translate into information used in a photograph?

The more light we have, the more energy is transported in that light. Thus, more photons release more free electrons; more electrons produce a larger electric charge and increase the signal. As such, we have more information in the photograph. This is the main explanation and justification for exposing to the right (ETTR). If we blow out the highlights in a photograph, we lose much more information than we lose when crushing the shadows. As a consequence, crushed shadows can often be partly restored, while blown highlights can’t.

However, it also answers our introductory question of why banding appears more often in dark colors. We have less light in the dark colors and thus less photo information. As such, dark colors are more prone to banding than bright colors, because there is not enough information to transition smoothly between the colors. While we might perceive a scene such as a sky as one uniform color, it is often the result of many minor deviations in color. If the change from one of these subtle colors to another is undetectable by our camera or computer, smooth transitions between them cannot be created and we recognize perceivable steps between them – what we see as banding.

Why does banding emerge in processing?

The reasons for banding to emerge in an image during processing are manifold. Basically, these problems are caused by over-editing, for example through using gradients and curves/levels adjustment layers, or by compressing the image file, for example when compressing the original 12-, 14- or even 16-bit image file into a standard 8-bit JPEG. Through this compression, we lose information, and banding may appear as a result.
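The information loss in such a compression is easy to simulate. The following minimal numpy sketch takes a smooth 16-bit ramp over a narrow tonal range (think of a dark sky) and quantizes it to 8-bit, as a JPEG export would; the specific value range is an illustrative assumption:

```python
import numpy as np

# A smooth 16-bit ramp over a narrow tonal range (~4% of 0..65535):
# 1000 gradually increasing values, as in a subtle sky gradient.
ramp16 = np.linspace(30000, 32600, 1000).astype(np.uint16)

# Compress to 8-bit, as a JPEG export would (65535 / 255 = 257).
ramp8 = (ramp16 / 257).round().astype(np.uint8)

print("distinct 16-bit levels:", len(np.unique(ramp16)))  # ~1000 steps
print("distinct 8-bit levels: ", len(np.unique(ramp8)))   # only ~11 steps
```

Roughly a thousand distinct tonal steps collapse into about a dozen; those dozen coarse steps are exactly what we perceive as bands.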

Compression is not only caused by the bit format, but also by other lossy techniques in the editing process. For example, if noise reduction is heavily applied to an image, information is deleted, which may cause banding as well. Applying a gradient after noise reduction has been performed on an image file often leads to banding problems. For this reason, one can often read the recommendation to apply noise reduction or sharpening as the final steps in editing an image.

How to avoid banding?

In the sections above, we have seen that banding depends on the number of colors captured in a scene and replicated on the computer. Avoiding banding therefore means striving to maximize the number of available colors in our photograph. How can we maximize them?

First, we should always use the highest bit depth our camera offers. As we know, most standard JPEGs are 8-bit (JPEG 2000 supports up to 16-bit; RAW files are typically 12- or 14-bit), i.e. we have an available set of 256 (= 2^8) shades each of red, green and blue, creating roughly 16.8 million available colors. While humans have exactly three types of color receptors (red, green and blue – that’s why we created the RGB color model), our brain is capable of distinguishing on average between about 8 and 10 million different colors. If we are not capable of differentiating more colors, why do we need a higher bit depth? Why is 8-bit not sufficient? Doing the math, 16-bit (65,536 = 2^16 shades each of red, green and blue) corresponds to about 281 trillion different colors. Why do we want 14-, 16- (as, for example, in the latest Sony chips) or even 32-bit? To demonstrate it, I created two images in Photoshop, one as an 8-bit (Image 2a) and the second as a 16-bit (Image 2b) image file. In both images I created an identical black-to-white gradient layer. On the monitor screen, both look identical.
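The color counts quoted above follow directly from the arithmetic: each pixel mixes one of 2^bits shades per channel across three channels. A quick sketch to verify the numbers:

```python
# Total representable colors for a given bit depth per channel:
# shades^3, since each pixel mixes red, green and blue.
for bits in (8, 16):
    shades = 2 ** bits
    total = shades ** 3
    print(f"{bits}-bit: {shades:>6} shades/channel -> {total:,} colors")
```

This confirms 16,777,216 (about 16.8 million) colors for 8-bit and 281,474,976,710,656 (about 281 trillion) for 16-bit.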

Image 2a: Black-to-White Gradient as an 8-bit Image File

Image 2b: Black-to-White Gradient as a 16-bit Image File

However, when we start editing the image files, differences appear. As Image 3 shows, I applied two curves adjustment layers to both files. The central idea is to use the first layer to squeeze the whole tonal range into a much smaller area around its centre. In the first adjustment, the whole input range from pure black to pure white (0,255) is squeezed into an output range of (120,140). The adjustments are illustrated in Images 4a and 4b.

Image 3: Adjustment Layers Applied to Both Bit Image Files

Image 4a: Curve-1 Adjustment with Input (0,255)

Image 4b: Curve-1 Adjustment with Output (120,140)

The resulting images look almost identical and show a quite stable gray representation of the gradient.

Image 5a: Black-to-White Gradient After Curve-1 Adjustment as an 8-bit Image File

Image 5b: Black-to-White Gradient After Curve-1 Adjustment as a 16-bit Image File

The second adjustment layer builds on the first (i.e. we leave the first layer visible in Photoshop). Here, we use the small range we created (120,140) as input and stretch it back to the full tonal range (0,255). The adjustments are illustrated in Images 6a and 6b.

Image 6a: Curve-2 Adjustment with Input (120,140)

Image 6b: Curve-2 Adjustment with Output (0,255)

The resulting image files now look very different, although the input files in the 8-bit and 16-bit versions looked almost identical. The final outcomes of the 8-bit and 16-bit files after the two curves adjustments are shown in Images 7a and 7b.

Image 7a: Black-to-White Gradient After The Two Curve Adjustments as an 8-bit Image File

Image 7b: Black-to-White Gradient After The Two Curve Adjustments as a 16-bit Image File

What we can observe through this small exercise is that we created a banding effect in the 8-bit image file just by squeezing the whole tonal range of the gradient into a small area around the center and afterwards stretching it back to the full tonal range. Through these adjustments we unfortunately lost a lot of detail within the image file. Because the 8-bit gray gradient only has 256 tonal levels, it was not able to replicate the gradient anymore. The 16-bit gradient, however, with its 65,536 tonal levels, was able to restore it. While the 16-bit file also lost information, we have much more flexibility in editing and will not encounter banding problems as quickly. Basically, 16-bit files offer a much higher range of flexibility in editing an image.
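The squeeze-and-stretch experiment can also be simulated numerically. The following sketch approximates the two curves adjustments by quantizing an ideal gradient to the file’s available levels after the squeeze; it is a simplified model of what Photoshop does, not an exact replica:

```python
import numpy as np


def squeeze_stretch(levels):
    """Squeeze a full-range gradient into the (120,140) band, quantize
    to the file's available tonal levels, then stretch it back."""
    gradient = np.linspace(0.0, 1.0, 2000)  # ideal smooth ramp

    # Curve 1: map (0,255) -> (120,140), then quantize to the file's levels.
    squeezed = np.round((120 + gradient * 20) / 255 * (levels - 1)) / (levels - 1)

    # Curve 2: stretch (120,140) back to (0,255).
    stretched = (squeezed * 255 - 120) / 20
    return len(np.unique(stretched))  # surviving tonal steps


print("8-bit :", squeeze_stretch(256))     # 21 steps  -> visible banding
print("16-bit:", squeeze_stretch(65536))   # 2000 steps -> still smooth
```

In the 8-bit case only the 21 integer values between 120 and 140 survive the squeeze, so the stretched gradient has 21 coarse bands; the 16-bit file keeps every one of the 2000 sampled steps.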

Another way to support this argument is to look at the Zone System, originally formulated by Ansel Adams and Fred Archer, and later further modified by Minor White. To move from black to white, Adams and Archer defined eleven zones, each representing one stop of light difference: Zone 0 represents pure black, Zone 10 pure white. The system shows (in line with my argument from above) that there is much more information in the bright tones than in the shadows. Thus, in an 8-bit image, Zone 0 (pure black) has only 2 levels per channel, resulting in 8 total colors, and Zone 1 has 4 levels per channel, resulting in 64 total colors. In contrast, a 16-bit image has 512 levels per channel in Zone 0, resulting in roughly 134 million total colors, and 1,024 levels per channel in Zone 1, resulting in roughly 1 billion total colors already. Image 8 shows the levels per channel and the number of total colors in an 8-bit compared to a 16-bit image file.

Image 8: Colors Per Channel And Total Colors in an 8-Bit Compared to a 16-Bit Image File Based on the Zone System
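The per-zone numbers above can be reproduced with a short sketch, assuming the simplified model used in the text: each zone one stop brighter doubles the per-channel levels, and total colors are the per-channel levels cubed.

```python
# Per-zone levels under the article's simplified doubling model.
def zone_table(base_levels):
    """base_levels = per-channel levels in Zone 0
    (2 for an 8-bit file, 512 for a 16-bit file)."""
    rows = []
    for zone in range(11):  # Zones 0..10, pure black to pure white
        per_channel = base_levels * 2 ** zone
        rows.append((zone, per_channel, per_channel ** 3))
    return rows


for zone, per_ch, total in zone_table(base_levels=2)[:2]:
    print(f"8-bit  Zone {zone}: {per_ch} levels/channel, {total} colors")
for zone, per_ch, total in zone_table(base_levels=512)[:2]:
    print(f"16-bit Zone {zone}: {per_ch} levels/channel, {total:,} colors")
```

This reproduces the figures quoted from Image 8: 8 and 64 total colors in Zones 0 and 1 of an 8-bit file, versus roughly 134 million and 1 billion in a 16-bit file.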


One consequence of this whole first section is another motivation to always prefer high-bit-depth RAW files over standard 8-bit JPEGs.

Second, using the lowest ISO possible reduces the amount of noise in an image file, and with it the need for heavy noise reduction, which may otherwise foster the banding problem.

Third, bracketing helps when you are confronted with scenes dominated by one color or by dark colors. As a rule of thumb, you can capture three different exposures at -1, 0 and +1 while carefully exposing to the right. Together, the three exposures give you more raw material, or image information, to cope with the banding problem. Along the same line of argument, graduated ND filters may also help. However, it is rare for banding to emerge already in the camera when capturing at the highest bit depth possible.

Fourth, a careful and thoughtful editing process is required. This process may start with having the final outlet in mind: images processed for Instagram, a personal website or print are edited and processed in completely different ways, because the level of detail plays an increasingly important role when moving from the first to the last outlet. Especially the combination of noise reduction or luminosity-mask processing with subsequent curves/levels adjustments on gradients has a high likelihood of producing banding.

Next week, I’ll focus on mitigating the effect of banding or even getting rid of it entirely. Stay tuned.



  1. Adams, Ansel. 1948. The Negative: Exposure and Development. Ansel Adams Basic Photography Series/Book 2. Boston: New York Graphic Society.
  2. Adams, Ansel. 1981. The Negative. The New Ansel Adams Basic Photography Series/Book 2. ed. Robert Baker. Boston: New York Graphic Society.
  3. White, Minor, Richard Zakia, and Peter Lorenz. 1976. The New Zone System Manual. Dobbs Ferry, N.Y.: Morgan & Morgan.
  4. Image 1: Light as a Wave, in: ScratchAPixel’s course on 3D Basic Lessons, 22.12.2015.