March 19, 2008

How a digital camera sensor works

www.cambridgeincolor.com has some of the best articles I have read that clearly explain the science and techniques of creating digital images. Topics cover everything from light striking your camera's sensor to the perception of the final image by your eyes and brain. While technical in nature, the tutorials include excellent images and diagrams that clearly illustrate complex topics. An in-depth understanding of how your camera works is certainly not a requirement for making great photos, but the more you know, the better chance you have of nailing that difficult shot or bringing out the best of each image in post-processing.

The following is a brief excerpt from the first article in the tutorial section on how a digital camera sensor works:

A digital camera uses a sensor array of millions of tiny pixels in order to produce the final image. When you press your camera's shutter button and the exposure begins, each of these pixels has a "photosite" which is uncovered to collect and store photons in a cavity. Once the exposure finishes, the camera closes each of these photosites, and then tries to assess how many photons fell into each. The relative quantity of photons in each cavity is then sorted into various intensity levels, whose precision is determined by bit depth (0–255 for an 8-bit image).
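The mapping from photon counts to discrete intensity levels can be sketched roughly as follows. This is a simplified illustration only: the full-well capacity, the perfectly linear response, and the function name are assumptions for the sake of the example, not how any particular camera's electronics actually behave.

```python
def photons_to_level(photons, full_well=50_000, bit_depth=8):
    """Map a photon count to an integer intensity level.

    Assumes a hypothetical full-well capacity and a linear response;
    real sensors add noise and behave non-linearly near saturation.
    """
    levels = 2 ** bit_depth                    # 256 levels for an 8-bit image
    fraction = min(photons, full_well) / full_well  # clip at a full cavity
    return min(int(fraction * levels), levels - 1)  # 0 .. levels-1
```

With these assumptions, an empty cavity maps to level 0, a half-full cavity to the middle of the range, and a full (or overflowing) cavity clips to the top level, which is why blown highlights lose all detail.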



Each cavity is unable to distinguish how much of each color has fallen in, so the above illustration would only be able to create grayscale images. To capture color images, each cavity has to have a filter placed over it which only allows penetration of a particular color of light. Virtually all current digital cameras can only capture one of the three primary colors in each cavity, and so they discard roughly 2/3 of the incoming light. As a result, the camera has to approximate the other two primary colors in order to have information about all three colors at every pixel. The most common type of color filter array is called a "Bayer array," shown below.



A Bayer array consists of alternating rows of red-green and green-blue filters. Notice how the Bayer array contains twice as many green as red or blue sensors. Each primary color does not receive an equal fraction of the total area because the human eye is more sensitive to green light than both red and blue light. Redundancy with green pixels produces an image which appears less noisy and has finer detail than could be accomplished if each color were treated equally. This also explains why noise in the green channel is much less than for the other two primary colors (see "Understanding Image Noise" for an example).
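The layout described above, and the idea of approximating the two missing primaries at each photosite, can be sketched in a few lines. This is an illustrative toy, not any camera's actual demosaicing pipeline; the helper names and the naive average-of-neighbors estimate are assumptions for the example.

```python
# One 2x2 tile of an RGGB Bayer pattern; the full mosaic repeats it.
BAYER_2X2 = [["R", "G"],
             ["G", "B"]]

def bayer_color(row, col):
    """Primary color captured by the photosite at (row, col)."""
    return BAYER_2X2[row % 2][col % 2]

# In any 4x4 patch, half the sites are green, a quarter red, a quarter blue.
mosaic = [[bayer_color(r, c) for c in range(4)] for r in range(4)]

def estimate_green(raw, row, col):
    """Approximate green at a non-green site by averaging the adjacent
    green photosites (a naive stand-in for real demosaicing; photosites
    falling outside the sensor are simply ignored)."""
    neighbors = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
    values = [raw[r][c] for r, c in neighbors
              if 0 <= r < len(raw) and 0 <= c < len(raw[0])
              and bayer_color(r, c) == "G"]
    return sum(values) / len(values)
```

Counting the letters in `mosaic` confirms the 2:1:1 green/red/blue ratio the excerpt describes, and `estimate_green` shows in miniature why the camera can report all three colors at every pixel even though each cavity only ever measured one.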

March 5, 2008

Leaving Space Behind Moving Subjects - Composition


While the site is generally interesting, I find the daily emails from Digital Photography School to be somewhat hit and miss. However, this succinct article provides some great photographic examples of when it's appropriate to break some of the established rules of composition.

When photographing a moving subject the generally acceptable compositional rule is to place the subject in the frame with space in front of it to give it room to move into (creating ‘active space’).

This is said to give the image more balance and provides the viewer of the image an answer to the question ‘where is the subject going?’

However, rules are meant to be broken, and as with every rule there are times when it can be very effective to break this one too.