
How Do Digital Sensors Work?
by Bob Dean

Most of us have made the shift from film to digital over the past several years. When we were shooting film, a number of us also experienced the sights and smells of the darkroom, so we had a pretty good idea of how film worked. Light interacted with photosensitive chemicals in the film emulsion, and during the developing process other chemicals stabilized the transformed image on a negative or transparency. Digital technology is not all that much different. As I have stated many times before in this column, as well as in my classes, the only difference between film and digital image making is the medium on which the image is captured. Let’s talk digital!

In the digital sensor world, photosensitive has a different meaning than in the film world. Digital sensors are electronic parts (integrated circuits, or ICs) with a physical structure that allows incoming light to generate electric signals. By the way, ICs are typically referred to as chips in the industry, so I’ll be mixing terms. These signals are conducted away from each sensor site (the picture element, or pixel) by very tiny wires that are part of the IC. The wires carry the signal to amplifiers on the same chip, which boost the very faint signal to a level that can be manipulated and digitized by yet more circuits. The output of the digitizing circuit is a light level, period. This is because the individual sensor sites on a digital sensor are monochromatic; they see light only in terms of intensity, not color. So why aren’t all digital images black and white?
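The read-out chain described above can be sketched in a few lines of code. This is a toy model, not any real camera's firmware; the gain, full-well capacity, and 12-bit depth are illustrative assumptions, but the flow is the same: photons generate electrons at a site, an amplifier boosts the charge, and an analog-to-digital converter turns it into a plain intensity number.

```python
# Toy model of one sensor site's read-out chain (illustrative values):
# photons -> electrons -> amplified signal -> digitized intensity level.

def digitize_pixel(photon_count, gain=4.0, full_well=40000, bit_depth=12):
    """Convert a photon count at one sensor site to an intensity level."""
    electrons = min(photon_count, full_well)  # a site saturates at its full-well capacity
    signal = electrons * gain                 # on-chip amplifier boosts the tiny charge
    levels = 2 ** bit_depth                   # a 12-bit converter yields 4096 levels
    return min(int(signal / (full_well * gain) * (levels - 1)), levels - 1)

print(digitize_pixel(0))       # a dark site reads 0
print(digitize_pixel(20000))   # a half-full site reads about mid-scale
print(digitize_pixel(40000))   # a saturated site pegs at 4095
```

Note that the output is just a number from 0 to 4095; nothing in this chain knows anything about color, which is exactly the point of the question above.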

The clever design engineers who develop sensor technology also have a pretty good understanding of the human eye and how we perceive color. This is actually a carryover from color film technology, where designers used color-sensitive layers in the emulsion.

The basic sensor is a grid of structures that convert light energy (photons) into electrical energy (electrons) in a way that is not all that different from solar cells. Color is added to the image by filtering the light before it strikes the sensor. Our old friends red, green, and blue (RGB) are at work here. The light is filtered so that each of those colors strikes specific sensor sites, and when the final signal is digitized into a light intensity level, the tiny little computer chip in the camera can correlate that intensity with a color and add it to the data file for that site. If you were to magnify the filter structure on a camera sensor, you would find about 25% of the sites detect red, 25% blue, and 50% green (an arrangement commonly known as the Bayer pattern). This ratio was established to allow the sensor to more closely match the response of the human eye and thus make further “post processing” easier.
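The filter mosaic above is easy to visualize in code. The sketch below builds one common arrangement of that mosaic, a repeating 2x2 tile of red, green, green, blue, and counts the sites to confirm the 25% red, 50% green, 25% blue ratio; the 8x8 grid size is just an example.

```python
# Sketch of the repeating 2x2 color-filter tile (R G / G B) over a sensor grid.

def bayer_color(row, col):
    """Filter color over the site at (row, col) in an R-G/G-B mosaic."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

width, height = 8, 8
counts = {"R": 0, "G": 0, "B": 0}
for r in range(height):
    for c in range(width):
        counts[bayer_color(r, c)] += 1

total = width * height
for color in "RGB":
    print(color, counts[color] / total)  # R 0.25, G 0.5, B 0.25
```

Because each site records only one color, the camera's computer later estimates the two missing colors at every site from its neighbors, which is part of what "post processing" means here.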

After all of the sites have been scanned for light intensity and the camera settings have been added, the data is ready to be stored as a RAW image. As a side thought, a 10-megapixel sensor has about 10 million sites; imagine how fast that little computer is working if you can shoot about 8 images a second.
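The back-of-the-envelope arithmetic behind that side thought is worth spelling out, using the 10-megapixel and 8-frames-per-second figures from the text:

```python
# Rough throughput for the figures quoted above.
sites = 10_000_000           # ~10 million sites on a 10-megapixel sensor
frames_per_second = 8        # burst shooting rate
reads_per_second = sites * frames_per_second
print(f"{reads_per_second:,} site read-outs per second")
```

That works out to 80 million site read-outs every second, before the camera even starts packaging the data into a RAW file.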